# Diagnosing And Fixing Manifold Overfitting In Deep Generative Models

Gabriel Loaiza-Ganem *gabriel@layer6.ai* Layer 6 AI
Brendan Leigh Ross brendan@layer6.ai Layer 6 AI
Jesse C. Cresswell jesse@layer6.ai Layer 6 AI
Anthony L. Caterini *anthony@layer6.ai* Layer 6 AI
Reviewed on OpenReview: *https://openreview.net/forum?id=0nEZCVshxS*

## Abstract

Likelihood-based, or *explicit*, deep generative models use neural networks to construct flexible high-dimensional densities. This formulation directly contradicts the manifold hypothesis, which states that observed data lies on a low-dimensional manifold embedded in high-dimensional ambient space. In this paper we investigate the pathologies of maximum-likelihood training in the presence of this dimensionality mismatch. We formally prove that degenerate optima are achieved wherein the manifold itself is learned but not the distribution on it, a phenomenon we call *manifold overfitting*. We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation, and prove that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting. We also show that these procedures enable density estimation on the manifolds learned by *implicit* models, such as generative adversarial networks, hence addressing a major shortcoming of these models. Several recently proposed methods are instances of our two-step procedures; we thus unify, extend, and theoretically justify a large class of models.

## 1 Introduction

We consider the standard setting for generative modelling, where samples $\{x_n\}_{n=1}^N \subset \mathbb{R}^D$ of high-dimensional data from some unknown distribution $\mathbb{P}^*$ are observed, and the task is to estimate $\mathbb{P}^*$. Many deep generative models (DGMs) (Bond-Taylor et al., 2021), including variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014; Ho et al., 2020; Kingma et al., 2021) and variants such as adversarial variational Bayes (AVB) (Mescheder et al., 2017), normalizing flows (NFs) (Dinh et al., 2017; Kingma & Dhariwal, 2018; Behrmann et al., 2019; Chen et al., 2019; Durkan et al., 2019; Cornish et al., 2020), energy-based models (EBMs) (Du & Mordatch, 2019), and continuous autoregressive models (ARMs) (Uria et al., 2013; Theis & Bethge, 2015), use neural networks to construct a flexible density trained to match $\mathbb{P}^*$ by maximizing either the likelihood or a lower bound of it. This modelling choice implies the model has $D$-dimensional support,^1 thus directly contradicting the manifold hypothesis (Bengio et al., 2013), which states that high-dimensional data is supported on $\mathcal{M}$, an unknown $d$-dimensional embedded submanifold of $\mathbb{R}^D$, where $d < D$.

^1 This is indeed true of VAEs and AVB, even though both use low-dimensional latent variables, as the observational model being fully dimensional implies every point in $\mathbb{R}^D$ is assigned strictly positive density, regardless of what the latent dimension is.

There is strong evidence supporting the manifold hypothesis. Theoretically, the sample complexity of kernel density estimation is known to scale exponentially with ambient dimension $D$ when no low-dimensional structure exists (Cacoullos, 1966), and with intrinsic dimension $d$ when it does (Ozakin & Gray, 2009). These results suggest the complexity of learning distributions scales exponentially with the intrinsic dimension of their support, and the same applies for manifold learning (Narayanan & Mitter, 2010). Yet, if estimating distributions or manifolds required exponentially many samples in $D$, these problems would be impossible to solve in practice. The success itself of deep-learning-based methods on these tasks thus supports the manifold hypothesis. Empirically, Pope et al. (2021) estimate $d$ for commonly-used image datasets and find that, indeed, it is much smaller than $D$.

A natural question arises: *how relevant is the aforementioned modelling mismatch?* We answer this question by proving that when $\mathbb{P}^*$ is supported on $\mathcal{M}$, maximum-likelihood training of a flexible $D$-dimensional density results in $\mathcal{M}$ itself being learned, but not $\mathbb{P}^*$. Our result extends that of Dai & Wipf (2019) beyond VAEs to all likelihood-based models and drops the empirically unrealistic assumption that $\mathcal{M}$ is homeomorphic to $\mathbb{R}^d$ (e.g. one can imagine the MNIST (LeCun, 1998) manifold as having 10 connected components, one per digit).

This phenomenon - which we call *manifold overfitting* - has profound consequences for generative modelling. Maximum-likelihood is indisputably one of the most important concepts in statistics, and enjoys well-studied theoretical properties such as consistency and asymptotic efficiency under seemingly mild regularity conditions (Lehmann & Casella, 2006). These conditions can indeed be reasonably expected to hold in the setting of "classical statistics" under which they were first considered, where models were simpler and available data was of much lower ambient dimension than by modern standards. However, in the presence of $d$-dimensional manifold structure, the previously innocuous assumption that there exists a ground truth $D$-dimensional density cannot possibly hold. Manifold overfitting thus shows that DGMs do not enjoy the supposed theoretical benefits of maximum-likelihood, which is often regarded as a principled objective for training DGMs, because they will recover the manifold but not the distribution on it. We highlight that manifold overfitting is a problem with maximum-likelihood itself, and thus universally affects all explicit DGMs.

In order to address manifold overfitting, we propose a class of two-step procedures, depicted in Fig. 1. The first step, which we call *generalized autoencoding*,^2 reduces the dimension of the data through an encoder $g : \mathbb{R}^D \to \mathbb{R}^d$ while also learning how to map back to $\mathcal{M}$ through a decoder $G : \mathbb{R}^d \to \mathbb{R}^D$. In the second step, maximum-likelihood estimation is performed on the low-dimensional representations $\{g(x_n)\}_{n=1}^N$ using a DGM. Intuitively, the first step removes the dimensionality mismatch in order to avoid manifold overfitting in the second step. This intuition is confirmed in a second theoretical result where we prove that, given enough capacity, our two-step procedures indeed recover $\mathbb{P}^*$ in the infinite data limit while retaining density evaluation. Just as manifold overfitting pervasively affects likelihood-based DGMs, our proposed two-step procedures address this issue in an equally broad manner. We also identify DGMs that are instances of our procedure class. Our methodology thus results in novel models, and provides a unifying perspective and theoretical justification for all these related works.

![1_image_0.png](1_image_0.png)

Figure 1: Depiction of our two-step procedures. In the first step, we learn to map from $\mathcal{M}$ to $\mathbb{R}^d$ through $g$, and to invert this mapping through $G$. In the second step, we perform density estimation (green density on the right) on the dataset encoded through $g$. Our learned distribution on $\mathcal{M}$ (shades of green on the spiral) is given by pushing forward the density from the second step through $G$.

We also show that some implicit models (Mohamed & Lakshminarayanan, 2016), e.g. generative adversarial networks (GANs) (Goodfellow et al., 2014), can be made into generalized autoencoders. Consequently, in addition to preventing manifold overfitting on explicit models, our two-step procedures enable density evaluation for implicit models, thus addressing one of their main limitations. We show that this newly obtained ability of implicit models to perform density estimation can be used empirically to perform out-of-distribution (OOD) detection, and we obtain very promising results. To the best of our knowledge, principled density estimation with implicit models was previously considered impossible. Finally, we achieve significant empirical improvements in sample quality over maximum-likelihood, strongly supporting our theoretical findings. We show these improvements persist even when accounting for the additional parameters of the second-step model, or when adding Gaussian noise to the data as an attempt to remove the dimensionality mismatch that causes manifold overfitting.

^2 Our generalized autoencoders are unrelated to those of Wang et al. (2014).

## 2 Related Work And Motivation

**Manifold mismatch** It has been observed in the literature that $\mathbb{R}^D$-supported models exhibit undesirable behaviour when the support of the target distribution has complicated topological structure. For example, Cornish et al. (2020) show that the bi-Lipschitz constant of topologically-misspecified NFs must go to infinity, even without dimensionality mismatch, explaining phenomena like the numerical instabilities observed by Behrmann et al. (2021). Mattei & Frellsen (2018) observe VAEs can have unbounded likelihoods and are thus susceptible to similar instabilities. Dai & Wipf (2019) study dimensionality mismatch in VAEs and its effects on posterior collapse. These works motivate the development of models with low-dimensional support. Goodfellow et al. (2014) and Nowozin et al. (2016) model the data as the pushforward of a low-dimensional Gaussian through a neural network, thus making it possible to properly account for the dimension of the support. However, in addition to requiring adversarial training - which is more unstable than maximum-likelihood (Chu et al., 2020) - these models minimize the Jensen-Shannon divergence or f-divergences, respectively, in the *nonparametric* setting (i.e. infinite data limit with sufficient capacity), which are ill-defined due to dimensionality mismatch. Attempting to minimize Wasserstein distance has also been proposed (Arjovsky et al., 2017; Tolstikhin et al., 2018) as a way to remedy this issue, although estimating this distance is hard in practice (Arora et al., 2017) and unbiased gradient estimators are not available. In addition to having a more challenging training objective than maximum-likelihood, these *implicit* models lose a key advantage of *explicit* models: density evaluation. Our work aims to both properly account for the manifold hypothesis in likelihood-based DGMs while retaining density evaluation, and endow implicit models with density evaluation.

**NFs on manifolds** Several recent flow-based methods properly account for the manifold structure of the data. Gemici et al. (2016), Rezende et al. (2020), and Mathieu & Nickel (2020) construct flow models for prespecified manifolds, with the obvious disadvantage that the manifold is unknown for most data of interest. Brehmer & Cranmer (2020) propose injective NFs, which model the data-generating distribution as the pushforward of a $d$-dimensional Gaussian through an injective function $G : \mathbb{R}^d \to \mathbb{R}^D$, and avoid the change-of-variable computation through a two-step training procedure; we will see in Sec. 5 that this procedure is an instance of our methodology. Caterini et al. (2021) and Ross & Cresswell (2021) endow injective flows with tractable change-of-variable computations, the former through automatic differentiation and numerical linear algebra methods, and the latter with a specific construction of injective NFs admitting closed-form evaluation. We build a general framework encompassing a broader class of DGMs than NFs alone, giving them low-dimensional support without requiring injective transformations over $\mathbb{R}^d$.

**Adding noise** Denoising approaches add Gaussian noise to the data, making the $D$-dimensional model appropriate at the cost of recovering a noisy version of $\mathbb{P}^*$ (Vincent et al., 2008; Vincent, 2011; Alain & Bengio, 2014; Meng et al., 2021; Chae et al., 2021; Horvat & Pfister, 2021a;b; Cunningham & Fiterau, 2021). In particular, Horvat & Pfister (2021b) show that recovering the true manifold structure in this case is only guaranteed when adding noise orthogonally to the tangent space of the manifold, which cannot be achieved in practice when the manifold itself is unknown. In the context of score-matching (Hyvärinen, 2005), denoising has led to empirical success (Song & Ermon, 2019; Song et al., 2021). In Sec. 3.2 we show that adding small amounts of Gaussian noise to a distribution supported on a manifold results in highly peaked densities, which can be hard to learn. Zhang et al. (2020b) also make this observation, and propose to add the same amount of noise to the model itself. However, their method requires access to the density of the model after having added noise, which in practice requires a variational approximation and is thus only applicable to VAEs. Our first theoretical result can be seen as a motivation for any method based on adding noise to the data (as attempting to address manifold overfitting), and our two-step procedures are applicable to all likelihood-based DGMs. We empirically verify that simply adding Gaussian noise to the data and fitting a maximum-likelihood DGM as usual is not enough to avoid manifold overfitting in practice. Our results highlight that manifold overfitting can manifest itself empirically even when the data is close to a manifold rather than exactly on one, and that naïvely adding noise does not fix it. We hope that our work will encourage further advances aiming to address manifold overfitting, including ones based on adding noise.

![3_image_0.png](3_image_0.png)

![3_image_1.png](3_image_1.png)

Figure 2: **Left panel**: $\mathbb{P}^*$ (green); $p_t(x) = 0.3 \cdot \mathcal{N}(x; -1, 1/t) + 0.7 \cdot \mathcal{N}(x; 1, 1/t)$ (orange, dashed) for $t = 5$, which converges weakly to $\mathbb{P}^*$ as $t \to \infty$; and $p'_t(x) = 0.8 \cdot \mathcal{N}(x; -1, 1/t) + 0.2 \cdot \mathcal{N}(x; 1, 1/t)$ (purple, dotted) for $t = 5$, which converges weakly to $\mathbb{P}^\dagger = 0.8\delta_{-1} + 0.2\delta_1$ while getting arbitrarily large likelihoods under $\mathbb{P}^*$, i.e. $p'_t(x) \to \infty$ as $t \to \infty$ for $x \in \mathcal{M}$; Gaussian VAE density (blue, solid). **Right panel**: Analogous phenomenon with $D = 2$ and $d = 1$, with the blue density "spiking" around $\mathcal{M}$ in a manner unlike $\mathbb{P}^*$ (green) while achieving large likelihoods.

## 3 Manifold Overfitting

## 3.1 An Illustrative Example

Consider the simple case where $D = 1$, $d = 0$, $\mathcal{M} = \{-1, 1\}$, and $\mathbb{P}^* = 0.3\delta_{-1} + 0.7\delta_1$, where $\delta_x$ denotes a point mass at $x$. Suppose the data is modelled with a mixture of Gaussians $p(x) = \lambda \cdot \mathcal{N}(x; m_1, \sigma^2) + (1 - \lambda) \cdot \mathcal{N}(x; m_2, \sigma^2)$ parameterized by a mixture weight $\lambda \in [0, 1]$, means $m_1, m_2 \in \mathbb{R}$, and a shared variance $\sigma^2 \in \mathbb{R}_{>0}$, which we will think of as a flexible density. This model can learn the correct distribution in the limit $\sigma^2 \to 0$, as shown on the left panel of Fig. 2 (dashed line in orange). However, arbitrarily large likelihood values can be achieved by other densities - the one shown with a purple dotted line approximates a distribution $\mathbb{P}^\dagger$ on $\mathcal{M}$ which *is not* $\mathbb{P}^*$ but nonetheless has large likelihoods. The implication is simple: maximum-likelihood estimation will not necessarily recover the data-generating distribution $\mathbb{P}^*$. Our choice of $\mathbb{P}^\dagger$ (see figure caption) was completely arbitrary, hence any distribution on $\mathcal{M}$ other than $\delta_{-1}$ or $\delta_1$ could be recovered with likelihoods diverging to infinity. Recovering $\mathbb{P}^*$ is then a coincidence which we should not expect to occur when training via maximum-likelihood. In other words, we should expect maximum-likelihood to recover the manifold (i.e. $m_1 = \pm 1$, $m_2 = \mp 1$ and $\sigma^2 \to 0$), but not the distribution on it (i.e. $\lambda \notin \{0.3, 0.7\}$).

We also plot the density learned by a Gaussian VAE (see App. C.2) in blue to show this issue empirically. While this model assigns some probability outside of $\{-1, 1\}$ due to limited capacity, the probabilities assigned around $-1$ and $1$ are far off from $0.3$ and $0.7$, respectively; even after quantizing with the sign function, the VAE only assigns probability $0.53$ to $x = 1$.

The underlying issue here is that $\mathcal{M}$ is "too thin in $\mathbb{R}^D$" (it has Lebesgue measure 0), and thus $p(x)$ can "spike to infinity" at every $x \in \mathcal{M}$. If the dimensionalities were correctly matched this could not happen, as the requirement that $p$ integrate to 1 would be violated. We highlight that this issue is not only a problem with data having intrinsic dimension $d = 0$, and can happen whenever $d < D$. The right panel of Fig. 2 shows another example of this phenomenon with $d = 1$ and $D = 2$, where a distribution $\mathbb{P}^*$ (green curve) is poorly approximated with a density $p$ (blue surface) which nonetheless would achieve high likelihoods by "spiking around $\mathcal{M}$". Looking ahead to our experiments, the middle panel of Fig. 4 shows a 2-dimensional EBM suffering from this issue, spiking around the ground truth manifold on the left panel, but not correctly recovering the distribution on it. The intuition provided by these examples is that if a flexible $D$-dimensional density $p$ is trained with maximum-likelihood when $\mathbb{P}^*$ is supported on a low-dimensional manifold, it is possible to simultaneously achieve large likelihoods while being close to any $\mathbb{P}^\dagger$, rather than close to $\mathbb{P}^*$. We refer to this phenomenon as *manifold overfitting*, as the density will concentrate around the manifold, but will do so in an incorrect way, recovering an arbitrary distribution on the manifold rather than the correct one. Note that the problem is not that the likelihood can be arbitrarily large (e.g. intended behaviour in Fig. 2), but that large likelihoods can be achieved *while not recovering* $\mathbb{P}^*$. Manifold overfitting thus calls into question the validity of maximum-likelihood as a training objective in the setting where the data lies on a low-dimensional manifold.
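The example above can also be checked numerically. The following sketch (the sample size, seed, and the "wrong" weight $\lambda = 0.8$ are illustrative choices, not values taken from this paper, and it uses SciPy rather than any of the models considered here) evaluates the average log-likelihood of samples from $\mathbb{P}^* = 0.3\delta_{-1} + 0.7\delta_1$ under the two-component mixture: the value grows without bound as $\sigma^2 \to 0$, even though the implied distribution on $\mathcal{M}$ is not $\mathbb{P}^*$.

```python
# Numerical illustration of manifold overfitting in the D = 1, d = 0 example.
# The sample size, seed, and lambda = 0.8 are illustrative, not from the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=10_000, p=[0.3, 0.7])  # samples from P*

def avg_loglik(lam, sigma):
    """Average log-likelihood under the two-component Gaussian mixture."""
    p = lam * norm.pdf(x, loc=-1.0, scale=sigma) + (1.0 - lam) * norm.pdf(x, loc=1.0, scale=sigma)
    return np.log(p).mean()

for sigma in [1.0, 0.1, 0.01, 0.001]:
    # Diverges as sigma -> 0 despite lambda = 0.8 not matching the true weights (0.3, 0.7).
    print(f"sigma = {sigma:g}, average log-likelihood = {avg_loglik(0.8, sigma):.2f}")
```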

## 3.2 The Manifold Overfitting Theorem

We now formalize the intuition developed so far. We assume some familiarity with measure theory (Billingsley, 2008) and with smooth (Lee, 2013) and Riemannian manifolds (Lee, 2018). Nonetheless, we provide a measure theory primer in App. A, where we informally review relevant concepts such as absolute continuity of measures ($\ll$), densities as Radon-Nikodym derivatives, weak convergence, properties holding almost surely with respect to a probability measure, and pushforward measures. We also use the concept of Riemannian measure (Pennec, 2006), which plays an analogous role on manifolds to that of the Lebesgue measure on Euclidean spaces. We briefly review Riemannian measures in App. B.1, and refer the reader to Dieudonné (1973) for a thorough treatment.^3 We begin by defining a useful condition on probability distributions for the following theorems, which captures the intuition of "continuously spreading mass all around $\mathcal{M}$".

^3 See especially Sec. 22 of Ch. 16. Note Riemannian measures are called Lebesgue measures in this reference.

**Definition 1 (Smoothness of Probability Measures)**: Let $\mathcal{M}$ be a finite-dimensional $C^1$ manifold, and let $\mathbb{P}$ be a probability measure on $\mathcal{M}$. Let $g$ be a Riemannian metric on $\mathcal{M}$ and $\mu_{\mathcal{M}}^{(g)}$ the corresponding Riemannian measure. We say that $\mathbb{P}$ is *smooth* if $\mathbb{P} \ll \mu_{\mathcal{M}}^{(g)}$ and it admits a continuous density $p : \mathcal{M} \to \mathbb{R}_{>0}$ with respect to $\mu_{\mathcal{M}}^{(g)}$.

Note that smoothness of $\mathbb{P}$ is independent of the choice of Riemannian metric $g$ (see App. B.1). We emphasize that this is a weak requirement, corresponding in the Euclidean case to $\mathbb{P}$ admitting a continuous and positive density with respect to the Lebesgue measure, and that it is not required of $\mathbb{P}^*$ in our first theorem below. Denoting the Lebesgue measure on $\mathbb{R}^D$ as $\mu_D$, we now state our first result.

**Theorem 1 (Manifold Overfitting)**: Let $\mathcal{M} \subset \mathbb{R}^D$ be an analytic $d$-dimensional embedded submanifold of $\mathbb{R}^D$ with $d < D$, and $\mathbb{P}^\dagger$ a smooth probability measure on $\mathcal{M}$. Then there exists a sequence of probability measures $(\mathbb{P}_t)_{t=1}^\infty$ on $\mathbb{R}^D$ such that:

1. $\mathbb{P}_t \to \mathbb{P}^\dagger$ weakly as $t \to \infty$.
2. For every $t \geq 1$, $\mathbb{P}_t \ll \mu_D$ and $\mathbb{P}_t$ admits a density $p_t : \mathbb{R}^D \to \mathbb{R}_{>0}$ with respect to $\mu_D$ such that:
   (a) $\lim_{t \to \infty} p_t(x) = \infty$ for every $x \in \mathcal{M}$.
   (b) $\lim_{t \to \infty} p_t(x) = 0$ for every $x \notin \mathrm{cl}(\mathcal{M})$, where $\mathrm{cl}(\cdot)$ denotes closure in $\mathbb{R}^D$.

Proof sketch: We construct $\mathbb{P}_t$ by convolving $\mathbb{P}^\dagger$ with mean-zero Gaussian noise of covariance $\sigma_t^2 I_D$ for a sequence $(\sigma_t^2)_{t=1}^\infty$ satisfying $\sigma_t^2 \to 0$ as $t \to \infty$, and then carefully verify that the stated properties of $\mathbb{P}_t$ indeed hold. See App. B.2 for the full formal proof.
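Written out, the construction in the proof sketch amounts to

$$p_t(x) = \int_{\mathcal{M}} \mathcal{N}\left(x;\, m,\, \sigma_t^2 I_D\right) \mathrm{d}\mathbb{P}^\dagger(m), \qquad \sigma_t^2 \to 0 \ \text{as} \ t \to \infty.$$

Heuristically, for $x \in \mathcal{M}$ the Gaussian peak scales as $\sigma_t^{-D}$ while the $\mathbb{P}^\dagger$-mass within distance $\sigma_t$ of $x$ scales as $\sigma_t^d$, so $p_t(x)$ grows like $\sigma_t^{d-D} \to \infty$ since $d < D$; away from $\mathrm{cl}(\mathcal{M})$ the integrand vanishes, and the shrinking noise scale gives weak convergence to $\mathbb{P}^\dagger$.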

Informally, part 1 says that $\mathbb{P}_t$ can get arbitrarily close to $\mathbb{P}^\dagger$, and part 2 says that this can be achieved with densities diverging to infinity on all of $\mathcal{M}$. The relevance of this statement is that large likelihoods of a model do not imply it is adequately learning the target distribution $\mathbb{P}^*$, showing that maximum-likelihood is not a valid objective when data has low-dimensional manifold structure. Maximizing $\frac{1}{N}\sum_{n=1}^N \log p(x_n)$, or $\mathbb{E}_{X \sim \mathbb{P}^*}[\log p(X)]$ in the nonparametric regime, over a $D$-dimensional density $p$ need not recover $\mathbb{P}^*$: since $\mathbb{P}^*$ is supported on $\mathcal{M}$, it follows by Theorem 1 that not only can the objective be made arbitrarily large, but that this can be done while recovering any $\mathbb{P}^\dagger$, which need not match $\mathbb{P}^*$. The failure to recover $\mathbb{P}^*$ is caused by the density being able to take arbitrarily large values on all of $\mathcal{M}$, thus *overfitting to the manifold*. When $p$ is a flexible density, as for many DGMs with universal approximation properties (Hornik, 1991; Koehler et al., 2021), manifold overfitting becomes a key deficiency of maximum-likelihood - which we fix in Sec. 4.

Note also that the proof of Theorem 1 applied to the specific case where $\mathbb{P}^\dagger = \mathbb{P}^*$ formalizes the intuition that adding small amounts of Gaussian noise to $\mathbb{P}^*$ results in highly peaked densities, suggesting that the resulting distribution, which denoising methods aim to estimate, might be empirically difficult to learn. More generally, even if there exists a ground truth $D$-dimensional density which allocates most of its mass around $\mathcal{M}$, this density will be highly peaked. In other words, even if Theorem 1 does not technically apply in this setting, it still provides useful intuition as manifold overfitting might still happen in practice. Indeed, we empirically confirm in Sec. 6 that even if $\mathbb{P}^*$ is only "very close" to $\mathcal{M}$, manifold overfitting remains a problem.

**Differences from regular overfitting** Manifold overfitting is fundamentally different from regular overfitting. At its core, regular overfitting involves memorizing observed datapoints as a direct consequence of maximizing the finite-sample objective $\frac{1}{N}\sum_{n=1}^N \log p(x_n)$. This memorization can happen in different ways, e.g. the empirical distribution $\hat{\mathbb{P}}_N = \frac{1}{N}\sum_{n=1}^N \delta_{x_n}$ could be recovered.^4 Recovering $\hat{\mathbb{P}}_N$ requires increased model capacity as $N$ increases, as new data points have to be memorized. In contrast, manifold overfitting only requires enough capacity to concentrate mass around the manifold. Regular overfitting can happen in other ways too: a classical example (Bishop, 2006) being $p(x) = \frac{1}{2}\mathcal{N}(x; 0, I_D) + \frac{1}{2}\mathcal{N}(x; x_1, \sigma^2 I_D)$, which achieves arbitrarily large likelihoods as $\sigma^2 \to 0$ and only requires memorizing $x_1$. On the other hand, manifold overfitting does not arise from memorizing datapoints, and unlike regular overfitting, can persist even when maximizing the nonparametric objective $\mathbb{E}_{X \sim \mathbb{P}^*}[\log p(X)]$. Manifold overfitting is thus a more severe problem than regular overfitting, as it does not disappear in the infinite data regime. This property of manifold overfitting also makes detecting it more difficult: an unseen test datapoint $x_{N+1} \in \mathcal{M}$ will still be assigned very high likelihood - in line with the training data - under manifold overfitting, yet very low likelihood under regular overfitting. Comparing train and test likelihoods is thus not a valid way of detecting manifold overfitting, once again contrasting with regular overfitting, and highlighting that manifold overfitting is the more acute problem of the two.

**A note on divergences** Maximum-likelihood is often thought of as minimizing the KL divergence $\mathbb{KL}(\mathbb{P}^*||\mathbb{P})$ over the model distribution $\mathbb{P}$. Naïvely one might believe that this contradicts the manifold overfitting theorem, but this is not the case. In order for $\mathbb{KL}(\mathbb{P}^*||\mathbb{P}) < \infty$, it is required that $\mathbb{P}^* \ll \mathbb{P}$, which does not happen when $\mathbb{P}^*$ is a distribution on $\mathcal{M}$ and $\mathbb{P} \ll \mu_D$. For example, $\mathbb{KL}(\mathbb{P}^*||\mathbb{P}_t) = \infty$ for every $t \geq 1$, even if $\mathbb{E}_{X \sim \mathbb{P}^*}[\log p_t(X)]$ varies in $t$. In other words, minimizing the KL divergence is not equivalent to maximizing the likelihood in the setting of dimensionality mismatch, and the manifold overfitting theorem elucidates the effect of maximum-likelihood training in this setting. Similarly, other commonly considered divergences - such as f-divergences - cannot be meaningfully minimized. Arjovsky et al. (2017) propose using the Wasserstein distance as it is well-defined even in the presence of support mismatch, although we highlight once again that estimating and/or minimizing this distance is difficult in practice.

**Non-convergence of maximum-likelihood** The manifold overfitting theorem shows that any smooth distribution $\mathbb{P}^\dagger$ on $\mathcal{M}$ can be recovered through maximum-likelihood, even if it does not match $\mathbb{P}^*$. It does not, however, guarantee that *some* $\mathbb{P}^\dagger$ will even be recovered. It is thus natural to ask whether it is possible to have a sequence of distributions achieving arbitrarily large likelihoods while not converging at all. The result below shows this to be true: in other words, training a $D$-dimensional model could result in maximum-likelihood not even converging.

**Corollary 1**: Let $\mathcal{M} \subset \mathbb{R}^D$ be an analytic $d$-dimensional embedded submanifold of $\mathbb{R}^D$ with more than a single element, and $d < D$. Then, there exists a sequence of probability measures $(\mathbb{P}_t)_{t=1}^\infty$ on $\mathbb{R}^D$ such that:

1. $(\mathbb{P}_t)_{t=1}^\infty$ does not converge weakly.
2. For every $t \geq 1$, $\mathbb{P}_t \ll \mu_D$ and $\mathbb{P}_t$ admits a density $p_t : \mathbb{R}^D \to \mathbb{R}_{>0}$ with respect to $\mu_D$ such that:
   (a) $\lim_{t \to \infty} p_t(x) = \infty$ for every $x \in \mathcal{M}$.
   (b) $\lim_{t \to \infty} p_t(x) = 0$ for every $x \notin \mathrm{cl}(\mathcal{M})$.

Proof: Let $\mathbb{P}_1^\dagger$ and $\mathbb{P}_2^\dagger$ be two different smooth probability measures on $\mathcal{M}$, which exist since $\mathcal{M}$ has more than a single element. Let $(\mathbb{P}_t^1)_{t=1}^\infty$ and $(\mathbb{P}_t^2)_{t=1}^\infty$ be the corresponding sequences from Theorem 1. The sequence $(\mathbb{P}_t)_{t=1}^\infty$, given by $\mathbb{P}_t = \mathbb{P}_t^1$ if $t$ is even and $\mathbb{P}_t = \mathbb{P}_t^2$ otherwise, satisfies the above requirements.

^4 For example, the flexible model $p(x) = \frac{1}{N}\sum_{n=1}^N \mathcal{N}(x; x_n, \sigma^2 I_D)$ with $\sigma^2 \to 0$ recovers $\hat{\mathbb{P}}_N$.

## 4 Fixing Manifold Overfitting

## 4.1 The Two-Step Correctness Theorem

The previous section motivates the development of likelihood-based methods which work correctly even in the presence of dimensionality mismatch. Intuitively, fixing the mismatch should be enough, which suggests (i) first reducing the dimension of the data to some $d$-dimensional representation, and then (ii) applying maximum-likelihood density estimation on the lower-dimensional dataset. The following theorem, where $\mu_d$ denotes the Lebesgue measure on $\mathbb{R}^d$, confirms that this intuition is correct.

**Theorem 2 (Two-Step Correctness)**: Let $\mathcal{M} \subseteq \mathbb{R}^D$ be a $C^1$ $d$-dimensional embedded submanifold of $\mathbb{R}^D$, and let $\mathbb{P}^*$ be a distribution on $\mathcal{M}$. Assume there exist measurable functions $G : \mathbb{R}^d \to \mathbb{R}^D$ and $g : \mathbb{R}^D \to \mathbb{R}^d$ such that $G(g(x)) = x$, $\mathbb{P}^*$-almost surely. Then:

1. $G_\#(g_\#\mathbb{P}^*) = \mathbb{P}^*$, where $h_\#\mathbb{P}$ denotes the pushforward of measure $\mathbb{P}$ through the function $h$.
2. Moreover, if $\mathbb{P}^*$ is smooth, and $G$ and $g$ are $C^1$, then:
   (a) $g_\#\mathbb{P}^* \ll \mu_d$.
   (b) $G(g(x)) = x$ for every $x \in \mathcal{M}$, and the functions $\tilde{g} : \mathcal{M} \to g(\mathcal{M})$ and $\tilde{G} : g(\mathcal{M}) \to \mathcal{M}$ given by $\tilde{g}(x) = g(x)$ and $\tilde{G}(z) = G(z)$ are diffeomorphisms and inverses of each other.

Proof: See App. B.3.

We now discuss the implications of Theorem 2.

**Assumptions and correctness** The condition $G(g(x)) = x$, $\mathbb{P}^*$-almost surely, is what one should expect to obtain during the dimensionality reduction step, for example through an autoencoder (AE) (Rumelhart et al., 1985) where $\mathbb{E}_{X \sim \mathbb{P}^*}[\|G(g(X)) - X\|_2^2]$ is minimized over $G$ and $g$, provided these have enough capacity and that population-level expectations can be minimized. We do highlight, however, that we allow for a much more general class of procedures than just autoencoders; nonetheless, we still refer to $g$ and $G$ as the "encoder" and "decoder", respectively. Part 1, $G_\#(g_\#\mathbb{P}^*) = \mathbb{P}^*$, justifies using a first step where $g$ reduces the dimension of the data, and then having a second step attempting to learn the low-dimensional distribution $g_\#\mathbb{P}^*$: if a model $\mathbb{P}_Z$ on $\mathbb{R}^d$ matches the encoded data distribution, i.e. $\mathbb{P}_Z = g_\#\mathbb{P}^*$, it follows that $G_\#\mathbb{P}_Z = \mathbb{P}^*$. In other words, matching the distribution of encoded data and then decoding recovers the target distribution.
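Spelling out this last claim, if $\mathbb{P}_Z = g_\#\mathbb{P}^*$, then since pushforwards compose,

$$G_\#\mathbb{P}_Z = G_\#(g_\#\mathbb{P}^*) = (G \circ g)_\#\mathbb{P}^* = \mathbb{P}^*,$$

where the last equality uses $G(g(x)) = x$, $\mathbb{P}^*$-almost surely.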

Part 2a guarantees that maximum-likelihood can be used to learn $g_\#\mathbb{P}^*$: note that if the model $\mathbb{P}_Z$ is such that $\mathbb{P}_Z \ll \mu_d$ with density (i.e. Radon-Nikodym derivative) $p_Z = \mathrm{d}\mathbb{P}_Z/\mathrm{d}\mu_d$, and $g_\#\mathbb{P}^* \ll \mu_d$, then both distributions are dominated by $\mu_d$. Their KL divergence can then be expressed in terms of their densities:

$$\mathbb{KL}(g_{\#}\mathbb{P}^{*}||\mathbb{P}_{Z})=\int_{g(\mathcal{M})}p_{Z}^{*}\log{\frac{p_{Z}^{*}}{p_{Z}}}\,\mathrm{d}\mu_{d},\tag{1}$$

where $p_Z^* = \mathrm{d}(g_\#\mathbb{P}^*)/\mathrm{d}\mu_d$ is the density of the encoded ground truth distribution. Assuming that $\left|\int_{g(\mathcal{M})} p_Z^* \log p_Z^* \,\mathrm{d}\mu_d\right| < \infty$, the usual decomposition of KL divergence into expected log-likelihood and entropy applies, and it thus follows that maximum-likelihood over $p_Z$ is once again equivalent to minimizing $\mathbb{KL}(g_\#\mathbb{P}^*||\mathbb{P}_Z)$ over $\mathbb{P}_Z$. In other words, learning the distribution of encoded data through maximum-likelihood with a flexible density approximator such as a VAE, AVB, NF, EBM, or ARM, and then decoding the result is a valid way of learning $\mathbb{P}^*$ which avoids manifold overfitting.
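Explicitly, the decomposition referenced above reads

$$\mathbb{KL}(g_\#\mathbb{P}^*||\mathbb{P}_Z) = -\int_{g(\mathcal{M})} p_Z^* \log p_Z \,\mathrm{d}\mu_d + \int_{g(\mathcal{M})} p_Z^* \log p_Z^* \,\mathrm{d}\mu_d,$$

and since the second term does not depend on $\mathbb{P}_Z$, minimizing the left-hand side over $\mathbb{P}_Z$ coincides with maximizing the expected log-likelihood $\mathbb{E}_{Z \sim g_\#\mathbb{P}^*}[\log p_Z(Z)]$.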

**Density evaluation** Part 2b of the two-step correctness theorem bears some resemblance to injective NFs. However, note that the theorem does not imply $G$ is injective: it only implies its restriction to $g(\mathcal{M})$, $G|_{g(\mathcal{M})}$, is injective (and similarly for $g$). Fig. 3 exemplifies how this can happen even if $g$ and $G$ are not injective. As with injective NFs, the density $p_X$ of $G_\#\mathbb{P}_Z$ for a model $\mathbb{P}_Z$ on $g(\mathcal{M})$ is given by the injective change-of-variable formula:^5

$$p_{X}(x)=p_{Z}(g(x))\left|\det J_{G}^{\top}(g(x))J_{G}(g(x))\right|^{-\frac{1}{2}},\tag{2}$$

for $x \in \mathcal{M}$, where $J_G(g(x)) \in \mathbb{R}^{D \times d}$ is the Jacobian matrix of $G$ evaluated at $g(x)$. Practically, this observation enables density evaluation of a trained two-step model, for example for OOD detection.

Implementation-wise, we use the approach proposed by Caterini et al. (2021) in the context of injective NFs, which uses forward-mode automatic differentiation (Baydin et al., 2018) to efficiently construct the Jacobian in (2). We highlight that, unlike Caterini et al. (2021), we do not train our models through (2). Furthermore, injectivity is not enforced in $G$, but rather achieved at optimality of the encoder/decoder pair, and only on $g(\mathcal{M})$.
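As a concrete illustration, a minimal PyTorch sketch of evaluating (2) for a single point could look as follows. This is not the implementation used for the experiments (which constructs the Jacobian with forward-mode automatic differentiation, as described above); the callables `g`, `G`, and `log_p_z` are assumed to come from an already-trained two-step model.

```python
# Minimal sketch of density evaluation via the injective change-of-variable
# formula (2), using reverse-mode autodiff for simplicity; a forward-mode AD
# Jacobian (e.g. torch.func.jacfwd) would be closer in spirit to the approach
# described above.
import torch
from torch.autograd.functional import jacobian

def log_density_on_manifold(x, g, G, log_p_z):
    """log p_X(x) = log p_Z(g(x)) - 0.5 * log|det(J_G(z)^T J_G(z))| with z = g(x)."""
    z = g(x)                                   # low-dimensional representation, shape (d,)
    J = jacobian(G, z)                         # decoder Jacobian, shape (D, d)
    _, logdet = torch.linalg.slogdet(J.T @ J)  # log-determinant of the d x d Gram matrix
    return log_p_z(z) - 0.5 * logdet
```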

![7_image_0.png](7_image_0.png)

Figure 3: Illustration of how $g$ and $G$ can biject between $\mathcal{M}$ (spiral) and $g(\mathcal{M})$ (line segment) while not being fully bijective between $\mathbb{R}^D$ and $\mathbb{R}^d$.

## 4.2 Generalized Autoencoders

We now explain different approaches for obtaining $G$ and $g$. As previously mentioned, a natural choice would be an AE minimizing $\mathbb{E}_{X \sim \mathbb{P}^*}[\|G(g(X)) - X\|_2^2]$ over $G$ and $g$. However, many other choices are also valid. We call a *generalized autoencoder* (GAE) any procedure in which both (i) low-dimensional representations $z_n = g(x_n)$ are recovered for $n = 1, \ldots, N$, and (ii) a function $G$ is learned with the intention that $G(z_n) = x_n$ for $n = 1, \ldots, N$.

As alternatives to an AE, some DGMs can be used as GAEs, either because they directly provide $G$ and $g$ or can be easily modified to do so. These methods alone might obtain a $G$ which correctly maps to $\mathcal{M}$, but might not be correctly recovering $\mathbb{P}^*$. From the manifold overfitting theorem, this is what we should expect from likelihood-based models, and we argue it is not unreasonable to expect from other models as well. For example, the high quality of samples generated from adversarial methods (Brock et al., 2019) suggests they are indeed learning $\mathcal{M}$, but issues such as mode collapse (Che et al., 2017) suggest they might not be recovering $\mathbb{P}^*$ (Arbel et al., 2021). Among other options (Wang et al., 2020), we can use the following explicit DGMs as GAEs: (i) VAEs or (ii) AVB, using the mean of the encoder as $g$ and the mean of the decoder as $G$. We can also use the following implicit DGMs as GAEs: (iii) Wasserstein autoencoders (WAEs) (Tolstikhin et al., 2018) or any of its follow-ups (Kolouri et al., 2018; Patrini et al., 2020), again using the decoder as $G$ and the encoder as $g$, (iv) bidirectional GANs (BiGANs) (Donahue et al., 2017; Dumoulin et al., 2017), taking $G$ as the generator and $g$ as the encoder, or (v) any GAN, by fixing $G$ as the generator and then learning $g$ by minimizing reconstruction error $\mathbb{E}_{X \sim \mathbb{P}^*}[\|G(g(X)) - X\|_2^2]$.

Note that explicit construction of $g$ can be avoided as long as the representations $\{z_n\}_{n=1}^N$ are learned, which could be achieved through non-amortized models (Gershman & Goodman, 2014; Kim et al., 2018), or with optimization-based GAN inversion methods (Xia et al., 2021).

We summarize our two-step procedure class once again:

1. Learn $G$ and $\{z_n\}_{n=1}^N$ from $\{x_n\}_{n=1}^N$ with a GAE.
2. Learn $p_Z$ from $\{z_n\}_{n=1}^N$ with a likelihood-based DGM.

The final model is then given by pushing $p_Z$ forward through $G$. Any choice of GAE and likelihood-based DGM gives a valid instance of a two-step procedure. Note that $G$, and $g$ if it is also explicitly constructed, are fixed throughout the second step.
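A minimal, self-contained sketch of this procedure class is given below. It is not the reference implementation (see the repository linked in Sec. 6); the architectures and dimensions are illustrative placeholders, and the second-step "DGM" is simply a full-covariance Gaussian fit to the encodings, standing in for the flexible likelihood-based models (NFs, ARMs, EBMs, AVB, VAEs) that would be trained by maximum-likelihood in practice.

```python
# Sketch of a two-step procedure: (1) a plain autoencoder as the GAE,
# (2) density estimation on the encodings. Architectures, dimensions, and the
# Gaussian second-step model are illustrative placeholders, not the paper's.
import torch
import torch.nn as nn

D, d = 784, 20                                                       # ambient / latent dims (illustrative)
g = nn.Sequential(nn.Linear(D, 256), nn.ReLU(), nn.Linear(256, d))   # encoder
G = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, D))   # decoder

def train_gae(data_loader, epochs=10):
    """Step 1: learn G and g by minimizing the reconstruction error E[||G(g(X)) - X||^2]."""
    opt = torch.optim.Adam(list(g.parameters()) + list(G.parameters()), lr=1e-3)
    for _ in range(epochs):
        for x in data_loader:                # batches of flattened inputs, shape (B, D)
            loss = ((G(g(x)) - x) ** 2).sum(dim=1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()

def fit_second_step(data_loader):
    """Step 2: fit a density p_Z to the encodings z_n = g(x_n) (here, a Gaussian placeholder)."""
    with torch.no_grad():
        z = torch.cat([g(x) for x in data_loader])
    return torch.distributions.MultivariateNormal(z.mean(dim=0), torch.cov(z.T))

def sample(p_z, n):
    """Sample from the final model by pushing p_Z forward through the frozen decoder G."""
    with torch.no_grad():
        return G(p_z.sample((n,)))
```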

^5 The density $p_X$ is with respect to the Riemannian measure on $\mathcal{M}$ corresponding to the Riemannian metric inherited from $\mathbb{R}^D$. This measure can be understood as the volume form on $\mathcal{M}$, in that integrating against either yields the same results.

## 5 Towards Unifying Deep Generative Models

**Making implicit models explicit** As noted above, some DGMs are themselves GAEs, including some implicit models for which density evaluation is not typically available, such as WAEs, BiGANs, and GANs. Ramesh & LeCun (2018) use (2) to train implicit models, but they do not train a second-step DGM and thus have no mechanism to encourage trained models to satisfy the change-of-variable formula. Dieng et al. (2019) aim to provide GANs with density evaluation, but add $D$-dimensional Gaussian noise in order to achieve this, resulting in an adversarially-trained explicit model, rather than truly making an implicit model explicit. The two-step correctness theorem not only fixes manifold overfitting for explicit likelihood-based DGMs, but also enables density evaluation for these implicit models through (2) once a low-dimensional likelihood model has been trained on $g(\mathcal{M})$. We highlight the relevance of training the second-step model $p_Z$ for (2) to hold: even if $G$ mapped some base distribution on $\mathbb{R}^d$, e.g. a Gaussian, to $\mathbb{P}^*$, it need not be injective to achieve this, and could map distinct inputs to the same point on $\mathcal{M}$ (see Fig. 3). Such a $G$ could be the result of training an implicit model, e.g. a GAN, which correctly learned its target distribution. Training $g$, and $p_Z$ on $g(\mathcal{M}) \subseteq \mathbb{R}^d$, is still required to ensure $G|_{g(\mathcal{M})}$ is injective and (2) can be applied, even if the end result of this additional training is that the target distribution remains properly learned. Endowing implicit models with density evaluation addresses a significant downside of these models, and we show in Sec. 6.3 how this newfound capability can be used for OOD detection.

**Two-step procedures** Several methods can be seen through the lens of our two-step approach, and can be interpreted as addressing manifold overfitting thanks to Theorem 2. Dai & Wipf (2019) use a two-step VAE, where both the GAE and DGM are taken as VAEs. Xiao et al. (2019) use a standard AE along with an NF. Brehmer & Cranmer (2020), and Kothari et al. (2021) use an AE as the GAE where $G$ is an injective NF and $g$ its left inverse, and use an NF as the DGM. Ghosh et al. (2020) use an AE with added regularizers along with a Gaussian mixture model. Rombach et al. (2022) use a VAE along with a diffusion model (Ho et al., 2020) and obtain highly competitive empirical performance, which is justified by our theoretical results.

Other methods, while not exact instances, are philosophically aligned. Razavi et al. (2019) first obtain discrete low-dimensional representations of observed data and then train an ARM on these, which is similar to a discrete version of our own approach. Arbel et al. (2021) propose a model which they show is equivalent to pushing forward a low-dimensional EBM through G. The design of this model fits squarely into our framework, although a different training procedure is used.

The methods of Zhang et al. (2020c), Caterini et al. (2021), and Ross & Cresswell (2021) simultaneously optimize G, g, and pZ rather than using a two-step approach, combining in their loss a reconstruction term with a likelihood term as in (2). The validity of these methods however is not guaranteed by the two-step correctness theorem, and we believe a theoretical understanding of their objectives to be an interesting direction for future work.

## 6 Experiments

We now experimentally validate the advantages of our proposed two-step procedures across a variety of settings. We use the nomenclature A+B to refer to the two-step model with A as its GAE and B as its DGM.

All experimental details are provided in App. C, including a brief summary of the losses of the individual models we consider. For all experiments on images, we set $d = 20$ as a hyperparameter,^6 which we did not tune. We chose this value as it was close to the intrinsic dimension estimates obtained by Pope et al. (2021).

Our code^7 provides baseline implementations of all our considered GAEs and DGMs, which we hope will be useful to the community even outside of our proposed two-step methodology.

^6 We slightly abuse notation when talking about $d$ for a given model, since $d$ here does not refer to the true intrinsic dimension anymore, but rather the dimension over which $p_Z$ is defined (and which $G$ maps from and $g$ maps to), which need not match the true and unknown intrinsic dimension.

^7 https://github.com/layer6ai-labs/two_step_zoo

![9_image_0.png](9_image_0.png)

Figure 4: Results on simulated data: von Mises ground truth **(left)**, EBM **(middle)**, and AE+EBM **(right)**.

## 6.1 Simulated Data

We consider a von Mises distribution on the unit circle in Fig. 4. We learn this distribution both with an EBM and a two-step AE+EBM model. While the EBM indeed concentrates mass around the circle, it assigns higher density to an incorrect region of it (the top, rather than the right), corroborating manifold overfitting.

The AE+EBM model not only learns the manifold more accurately, it also assigns higher likelihoods to the correct part of it. We show additional results on simulated data in App. D.1, where we visually confirm that the reason two-step models outperform single-step ones trained through maximum-likelihood is the data being supported on a low-dimensional manifold.
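For reference, data of this kind can be generated as follows; the location, concentration, and sample size below are illustrative choices, not necessarily the settings used in App. C. Angles drawn from a von Mises distribution are mapped onto the unit circle, giving data with intrinsic dimension $d = 1$ embedded in ambient dimension $D = 2$.

```python
# Simulated manifold-supported data: a von Mises distribution on the unit circle.
# The location (mu), concentration (kappa), and sample size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.vonmises(mu=0.0, kappa=1.0, size=10_000)      # angles in [-pi, pi)
data = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # points on the circle in R^2
```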

## 6.2 Comparisons Against Maximum-Likelihood

We now show that our two-step methods empirically outperform maximum-likelihood training. Conveniently, some likelihood-based DGMs recover low-dimensional representations and hence are GAEs too, providing the opportunity to compare two-step training and maximum-likelihood training directly.

In particular, AVB and VAEs both maximize a lower bound of the log-likelihood, so we can train a first model as a GAE, recover low-dimensional representations, and then train a second-step DGM. Any performance difference compared to maximum-likelihood is then due to the second-step DGM rather than the choice of GAE.

We show the results in Table 1 for MNIST, FMNIST
(Xiao et al., 2017), SVHN (Netzer et al., 2011), and CIFAR-10 (Krizhevsky, 2009). We use Gaussian decoders with learnable scalar variance for both models, even for MNIST and FMNIST, as opposed to Bernoulli or other common choices (Loaiza-Ganem
& Cunningham, 2019) in order to properly model the data as continuous and allow for manifold overfitting to happen. While ideally we would compare models based on log-likelihood, this is only sensible for models sharing the same dominating measure; here this is not the case as the single-step models are $D$-dimensional, while our two-step models are not. We thus use the FID score (Heusel et al., 2017) as a measure of how well models recover $\mathbb{P}^*$. Table 1 shows that our two-step procedures consistently outperform single-step maximum-likelihood training, even when adding Gaussian noise to the data, thus highlighting that manifold overfitting is still an empirical issue even when the ground truth distribution is $D$-dimensional but highly peaked around a manifold.

Table 1: FID scores (lower is better). Means ± standard errors across 3 runs are shown. The superscript "+" indicates a larger model, and the subscript "σ" indicates added Gaussian noise. Unreliable FID scores are highlighted in red (see text for description).

| MODEL   | MNIST        | FMNIST      | SVHN         | CIFAR-10    |
|---------|--------------|-------------|--------------|-------------|
|---------|--------------|-------------|--------------|-------------|
| AVB     | 219.0 ± 4.2  | 235.9 ± 4.5 | 356.3 ± 10.2 | 289.0 ± 3.0 |
| AVB+    | 205.0 ± 3.9  | 216.2 ± 3.9 | 352.6 ± 7.6  | 297.1 ± 1.1 |
| AVB+ σ  | 205.2 ± 1.0  | 223.8 ± 5.4 | 353.0 ± 7.2  | 305.8 ± 8.7 |
| AVB+ARM | 86.4 ± 0.9   | 78.0 ± 0.9  | 56.6 ± 0.6   | 182.5 ± 1.0 |
| AVB+AVB | 133.3 ± 0.9  | 143.9 ± 2.5 | 74.5 ± 2.5   | 183.9 ± 1.7 |
| AVB+EBM | 96.6 ± 3.0   | 103.3 ± 1.4 | 61.5 ± 0.8   | 189.7 ± 1.8 |
| AVB+NF  | 83.5 ± 2.0   | 77.3 ± 1.1  | 55.4 ± 0.8   | 181.7 ± 0.8 |
| AVB+VAE | 106.2 ± 2.5  | 105.7 ± 0.6 | 59.9 ± 1.3   | 186.7 ± 0.9 |
| VAE     | 197.4 ± 1.5  | 188.9 ± 1.8 | 311.5 ± 6.9  | 270.3 ± 3.2 |
| VAE+    | 184.0 ± 0.7  | 179.1 ± 0.2 | 300.1 ± 2.1  | 257.8 ± 0.6 |
| VAE+ σ  | 185.9 ± 1.8  | 183.4 ± 0.7 | 302.2 ± 2.0  | 257.8 ± 1.7 |
| VAE+ARM | 69.7 ± 0.8   | 70.9 ± 1.0  | 52.9 ± 0.3   | 175.2 ± 1.3 |
| VAE+AVB | 117.1 ± 0.8  | 129.6 ± 3.1 | 64.0 ± 1.3   | 176.7 ± 2.0 |
| VAE+EBM | 74.1 ± 1.0   | 78.7 ± 2.2  | 63.7 ± 3.3   | 181.7 ± 2.8 |
| VAE+NF  | 70.3 ± 0.7   | 73.0 ± 0.3  | 52.9 ± 0.3   | 175.1 ± 0.9 |
| ARM+    | 98.7 ± 10.6  | 72.7 ± 2.1  | 168.3 ± 4.1  | 162.6 ± 2.2 |
| ARM+ σ  | 34.7 ± 3.1   | 23.1 ± 0.9  | 149.2 ± 10.7 | 136.1 ± 4.2 |
| AE+ARM  | 72.0 ± 1.3   | 76.0 ± 0.3  | 60.1 ± 3.0   | 186.9 ± 1.0 |
| EBM+    | 84.2 ± 4.3   | 135.6 ± 1.6 | 228.4 ± 5.0  | 201.4 ± 7.9 |
| EBM+ σ  | 101.0 ± 12.3 | 135.3 ± 0.9 | 235.0 ± 5.6  | 200.6 ± 4.8 |
| AE+EBM  | 75.4 ± 2.3   | 83.1 ± 1.9  | 75.2 ± 4.1   | 187.4 ± 3.7 |

We emphasize that we did not tune our two-step models, and thus the takeaway from Table 1 should not be about which combination of models is the best performing one, but rather how consistently two-step models outperform single-step models trained through maximum-likelihood. We also note that some of the baseline models are significantly larger, e.g. the VAE+ on MNIST has approximately 824k parameters, while the VAE model has 412k, and the VAE+EBM only 416k. The parameter efficiency of two-step models highlights that our empirical gains are not due to increasing model capacity but rather from addressing manifold overfitting. We show in App. C.4.3 a comprehensive list of parameter counts, along with an accompanying discussion.

Table 1 also shows comparisons between single and two-step models for ARMs and EBMs, which unlike AVB
and VAEs, are not GAEs themselves; we thus use an AE as the GAE for these comparisons. Although FID
scores did not consistently improve for these two-step models over their corresponding single-step baselines, we found the visual quality of samples was significantly better for almost all two-step models, as demonstrated in the first two columns of Fig. 5, and by the additional samples shown in App. D.2. We thus highlight with red the corresponding FID scores as unreliable in Table 1. We believe these failure modes of the FID
metric itself, wherein the scores do not correlate with visual quality, emphasize the importance of further research on sample-based scalar evaluation metrics for DGMs (Borji, 2022), although developing such metrics falls outside our scope. We also show comparisons using precision and recall (Kynkäänniemi et al., 2019) in App. D.4, and observe that two-step models still outperform single-step ones.

We also point out that one-step EBMs exhibited training difficulties consistent with maximum-likelihood non-convergence (App. D.3). Meanwhile, Langevin dynamics (Welling & Teh, 2011) for AE+EBM exhibit better and faster convergence, yielding good samples even when not initialized from the training buffer (see Fig. 13 in App. D.3), and AE+ARM speeds up sampling over the baseline ARM by a factor of O(D/d), in both cases because there are fewer coordinates in the sample space. All of the 44 two-step models shown in Table 1 visually outperformed their single-step counterparts (App. D.2), empirically corroborating our theoretical findings.

Finally, we have omitted some comparisons verified in prior work: Dai & Wipf (2019) show VAE+VAE
outperforms VAE, and Xiao et al. (2019) that AE+NF outperforms NF. We also include some preliminary experiments where we attempted to improve upon a GAN's generative performance on high resolution images in App. D.5. We used an optimization-based GAN inversion method, but found the reconstruction errors were too large to enable empirical improvements from adding a second-step model.

## 6.3 OOD Detection With Implicit Models

Having verified that, as predicted by Theorem 2, two-step models outperform maximum-likelihood training, we now turn our attention to the other consequence of this theorem, namely endowing implicit models with density evaluation after training a second-step DGM. We demonstrate that our approach advances fully-unsupervised likelihood-based out-of-distribution detection. Nalisnick et al. (2019) discovered the counter-intuitive phenomenon that likelihood-based DGMs sometimes assign higher likelihoods to OOD data than to in-distribution data. In particular, they found models trained on FMNIST and CIFAR-10 assigned higher likelihoods to MNIST and SVHN, respectively. While there has been a significant amount of research trying to remedy and explain this situation (Choi et al., 2018; Ren et al., 2019; Zisselman & Tamar, 2020; Zhang et al., 2020a; Kirichenko et al., 2020; Le Lan & Dinh, 2020; Caterini & Loaiza-Ganem, 2021), there is little work achieving good OOD performance using only likelihoods of models trained in a fully-unsupervised way to recover $\mathbb{P}^*$ rather than explicitly trained for OOD detection. Caterini et al. (2021) achieve improvements in this regard, although their method remains computationally expensive and has issues scaling (e.g. no results are reported on the CIFAR-10 → SVHN task).

We train several two-step models where the GAE is either a BiGAN or a WAE, which do not by themselves allow for likelihood evaluation, and then use the resulting log-likelihoods (or lower bounds/negative energy functions) for OOD detection. Two-step models allow us to use either the high-dimensional $\log p_X$ from (2) or the low-dimensional $\log p_Z$ as metrics for this task. We conjecture that the latter is more reliable, since (i) the base measure is always $\mu_d$, and (ii) the encoder-decoder is unlikely to exactly satisfy the conditions of Theorem 2. Hence, we use $\log p_Z$ here, and show results for $\log p_X$ in App. D.6.

![11_image_0.png](11_image_0.png)

Figure 5: Uncurated samples from single-step models (**first row**, showing ARM$^+_\sigma$, EBM+, AVB$^+_\sigma$, and VAE) and their respective two-step counterparts (**second row**, showing AE+ARM, AE+EBM, AVB+NF, and VAE+AVB), for MNIST (**first column**), FMNIST (**second column**), SVHN (**third column**), and CIFAR-10 (**fourth column**).

Table 2 shows the (balanced) classification accuracy of a decision stump given only the log-likelihood; we show some corresponding histograms in App. D.6.

The stump is forced to classify large likelihoods as in-distribution, so that accuracies below 50% indicate it incorrectly assigned higher likelihoods to OOD data.

We correct the classification accuracy to account for datasets of different size (details in App. D.6), resulting in an easily interpretable metric which can be understood as the expected classification accuracy if two same-sized samples of in-distribution and OOD
data were compared. Not only did we enable implicit models to perform OOD detection, but we also outperformed likelihood-based single-step models in this setting. To the best of our knowledge, no other model achieves nearly 50% (balanced) accuracy on CIFAR-10→SVHN using *only* likelihoods. Although admittedly the problem is not yet solved, we have certainly made progress on a challenging task for fully-unsupervised methods.
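As a rough sketch of how such an accuracy can be computed (the exact stump and size correction we use are described in App. D.6; the function and toy data below are illustrative assumptions, not our evaluation code), one can fit a depth-1 tree on the log-likelihoods, force the orientation described above, and average the two per-dataset accuracies so that dataset sizes do not distort the score:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def stump_ood_accuracy(loglik_in, loglik_ood):
    """Depth-1 tree on log-likelihood alone, scored with the forced orientation
    'above threshold = in-distribution', averaging the two class accuracies."""
    x = np.concatenate([loglik_in, loglik_ood]).reshape(-1, 1)
    y = np.concatenate([np.ones(len(loglik_in)), np.zeros(len(loglik_ood))])
    stump = DecisionTreeClassifier(max_depth=1).fit(x, y)
    threshold = stump.tree_.threshold[0]        # the single split the stump found
    acc_in = np.mean(loglik_in > threshold)     # in-distribution correctly kept
    acc_ood = np.mean(loglik_ood <= threshold)  # OOD correctly rejected
    return 0.5 * (acc_in + acc_ood)             # balanced over the two datasets

# Toy data: when OOD samples get the *higher* likelihoods, the score drops below 50%.
rng = np.random.default_rng(0)
print(stump_ood_accuracy(rng.normal(0.0, 1.0, 2000), rng.normal(2.0, 1.0, 500)))
```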

Table 2: Classification accuracy (%) of a decision stump on log-likelihoods; arrows go from in-distribution to OOD data.

| MODEL     | FMNIST → MNIST | CIFAR-10 → SVHN |
|-----------|----------------|-----------------|
| ARM+      | 9.9 ± 0.6      | 15.5 ± 0.0      |
| BiGAN+ARM | 81.9 ± 1.4     | 38.0 ± 0.2      |
| WAE+ARM   | 69.8 ± 13.9    | 40.1 ± 0.2      |
| AVB+      | 96.0 ± 0.5     | 23.4 ± 0.1      |
| BiGAN+AVB | 59.5 ± 3.1     | 36.4 ± 2.0      |
| WAE+AVB   | 90.7 ± 0.7     | 43.5 ± 1.9      |
| EBM+      | 32.5 ± 1.1     | 46.4 ± 3.1      |
| BiGAN+EBM | 51.2 ± 0.2     | 48.8 ± 0.1      |
| WAE+EBM   | 57.2 ± 1.3     | 49.3 ± 0.2      |
| NF+       | 36.4 ± 0.2     | 18.6 ± 0.3      |
| BiGAN+NF  | 84.2 ± 1.0     | 40.1 ± 0.2      |
| WAE+NF    | 95.4 ± 1.6     | 46.1 ± 1.0      |
| VAE+      | 96.1 ± 0.1     | 23.8 ± 0.2      |
| BiGAN+VAE | 59.7 ± 0.2     | 38.1 ± 0.1      |
| WAE+VAE   | 92.5 ± 2.7     | 41.4 ± 0.2      |

For completeness, we show samples from these models in App. D.2 and FID scores in App. D.4. Implicit models see less improvement in FID from adding a second-step DGM than explicit models, suggesting that manifold overfitting is a less dire problem for implicit models. Nonetheless, we do observe some improvements, particularly for BiGANs, hinting that our two-step methodology not only endows these models with density evaluation, but that it can also improve their generative performance. We further show in App. D.6 that OOD improvements obtained by two-step models apply to explicit models as well. Interestingly, whereas the VAEs used in Nalisnick et al. (2019) have Bernoulli likelihoods, we find that our single-step likelihood-based Gaussian-decoder VAE and AVB models perform quite well on distinguishing FMNIST from MNIST, yet still fail on the CIFAR-10 task. Studying this is of future interest but is outside the scope of this work.

## 7 Conclusions, Scope, And Limitations

In this paper we diagnosed manifold overfitting, a fundamental problem of maximum-likelihood training with flexible densities when the data lies on a low-dimensional manifold. We proposed a class of two-step procedures which remedy the issue, theoretically justify a large group of existing methods, and endow implicit models with density evaluation after training a low-dimensional likelihood-based DGM on encoded data.

Our two-step correctness theorem remains nonetheless a nonparametric result. In practice, the reconstruction error will be positive, i.e. $\mathbb{E}_{X \sim \mathbb{P}^*}[\lVert G(g(X)) - X \rVert_2^2] > 0$. Note that this can happen even when assuming infinite capacity, as $\mathcal{M}$ needs to be diffeomorphic to $g(\mathcal{M})$ for some $C^1$ function $g: \mathbb{R}^D \rightarrow \mathbb{R}^d$ for the reconstruction error to be 0. We leave a study of learnable topologies of $\mathcal{M}$ for future work. The density in (2) might then not be valid, either if the reconstruction error is positive, or if $p_Z$ assigns positive probability outside of $g(\mathcal{M})$. However, we note that our approach at least provides a mechanism to encourage our trained encoder-decoder pair to invert each other, suggesting that (2) might not be too far off. We also believe that a finite-sample extension of our result, while challenging, would be a relevant direction for future work. We hope our work will encourage follow-up research exploring different ways of addressing manifold overfitting, or its interaction with the score-matching objective.
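As a concrete instance of this topological obstruction, even a circle cannot be perfectly autoencoded through a latent space of its own intrinsic dimension:

$$\mathcal{M} = \mathbb{S}^1 \subset \mathbb{R}^2, \quad d = 1: \quad \text{any continuous } g: \mathbb{R}^2 \to \mathbb{R} \text{ must satisfy } g(x) = g(x') \text{ for some } x \neq x' \in \mathbb{S}^1,$$

since a continuous injection from the compact, connected $\mathbb{S}^1$ into $\mathbb{R}$ would have to be a homeomorphism onto a closed interval, which is impossible (removing an interior point disconnects the interval but not the circle). Hence $G(g(x)) = x$ cannot hold for all $x \in \mathbb{S}^1$, and the reconstruction error is bounded away from 0 no matter how flexible $g$ and $G$ are.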

Finally, we treated d as a hyperparameter, but in practice d is unknown and improvements can likely be had by estimating it (Levina & Bickel, 2004), as overspecifying it should not fully remove manifold overfitting, and underspecifying it would make learning M mathematically impossible. Still, we observed significant empirical improvements across a variety of tasks and datasets, demonstrating that manifold overfitting is not just a theoretical issue in DGMs, and that two-step methods are an important class of procedures to deal with it.

## Broader Impact Statement

Generative modelling has numerous applications besides image generation, including but not limited to: audio generation (van den Oord et al., 2016a; Engel et al., 2017), biology (Lopez et al., 2020), chemistry
(Gómez-Bombarelli et al., 2018), compression (Townsend et al., 2019; Ho et al., 2019; Golinski & Caterini, 2021; Yang et al., 2022), genetics (Riesselman et al., 2018), neuroscience (Sussillo et al., 2016; Gao et al.,
2016; Loaiza-Ganem et al., 2019), physics (Otten et al., 2021; Padmanabha & Zabaras, 2021), text generation
(Bowman et al., 2016; Devlin et al., 2019; Brown et al., 2020), text-to-image generation (Zhang et al., 2017; Ramesh et al., 2022; Saharia et al., 2022), video generation (Vondrick et al., 2016; Weissenborn et al., 2020),
and weather forecasting (Ravuri et al., 2021). While each of these applications can have positive impacts on society, it is also possible to apply deep generative models inappropriately, or create negative societal impacts through their use (Brundage et al., 2018; Urbina et al., 2022). When datasets are biased, accurate generative models will inherit those biases (Steed & Caliskan, 2021; Humayun et al., 2022). Inaccurate generative models may introduce new biases not reflected in the data. Our paper addresses a ubiquitous problem in generative modelling with maximum likelihood estimation - manifold overfitting - that causes models to fail to learn the distribution of data correctly. In this sense, correcting manifold overfitting should lead to more accurate generative models, and representations that more closely reflect the data.

## Acknowledgments

We thank the anonymous reviewers whose suggestions helped improve our work. In particular, we thank anonymous reviewer Cev4, as well as Taiga Abe, both of whom pointed out the mixture of two Gaussians regular overfitting example from Bishop (2006), which was missing from a previous version of our manuscript.

We wrote our code in Python (Van Rossum & Drake, 2009), and specifically relied on the following packages:
Matplotlib (Hunter, 2007), TensorFlow (Abadi et al., 2015) (particularly for TensorBoard), Jupyter Notebook
(Kluyver et al., 2016), PyTorch (Paszke et al., 2019), nflows (Durkan et al., 2020), NumPy (Harris et al.,
2020), prdc (Naeem et al., 2020), pytorch-fid (Seitzer, 2020), and functorch (He & Zou, 2021).

## References

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.

Guillaume Alain and Yoshua Bengio. What regularized auto-encoders learn from the data-generating distribution. *The Journal of Machine Learning Research*, 15(1):3563–3593, 2014.

Michael Arbel, Liang Zhou, and Arthur Gretton. Generalized energy based models. *ICLR*, 2021.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International conference on machine learning, pp. 214–223. PMLR, 2017.

Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (gans). In *International Conference on Machine Learning*, pp. 224–232. PMLR,
2017.

Shane Barratt and Rishi Sharma. A note on the inception score. *arXiv preprint arXiv:1801.01973*, 2018.

Atılım Günes Baydin, Barak A Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. *Journal of Machine Learning Research*, 18:1–43, 2018.

Jens Behrmann, Will Grathwohl, Ricky TQ Chen, David Duvenaud, and Jörn-Henrik Jacobsen. Invertible residual networks. In *International Conference on Machine Learning*, pp. 573–582. PMLR, 2019.

Jens Behrmann, Paul Vicol, Kuan-Chieh Wang, Roger Grosse, and Jörn-Henrik Jacobsen. Understanding and mitigating exploding inverses in invertible neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 1792–1800. PMLR, 2021.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives.

IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.

Patrick Billingsley. *Probability and measure*. John Wiley & Sons, 2008.

Christopher M. Bishop. *Pattern Recognition and Machine Learning*. Springer: New York, 2006.

S Bond-Taylor, A Leach, Y Long, and CG Willcocks. Deep generative modelling: A comparative review of VAEs, GANs, normalizing flows, energy-based and autoregressive models. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2021.

Ali Borji. Pros and cons of gan evaluation measures: New developments. *Computer Vision and Image* Understanding, 215:103329, 2022.

Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. In *20th SIGNLL Conference on Computational Natural Language* Learning, CoNLL 2016, pp. 10–21. Association for Computational Linguistics (ACL), 2016.

Johann Brehmer and Kyle Cranmer. Flows for simultaneous manifold learning and density estimation. In Advances in Neural Information Processing Systems, volume 33, 2020.

Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. *ICLR*, 2019.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners.

Advances in neural information processing systems, 33:1877–1901, 2020.

Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, et al. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. *arXiv preprint arXiv:1802.07228*, 2018.

Theophilos Cacoullos. Estimation of a multivariate density. *Annals of the Institute of Statistical Mathematics*,
18(1):179–189, 1966.

Anthony L Caterini and Gabriel Loaiza-Ganem. Entropic Issues in Likelihood-Based OOD Detection. *arXiv* preprint arXiv:2109.10794, 2021.

Anthony L Caterini, Gabriel Loaiza-Ganem, Geoff Pleiss, and John P Cunningham. Rectangular flows for manifold learning. In *Advances in Neural Information Processing Systems*, volume 34, 2021.

Minwoo Chae, Dongha Kim, Yongdai Kim, and Lizhen Lin. A likelihood approach to nonparametric estimation of a singular distribution using deep generative models. *arXiv preprint arXiv:2105.04046*, 2021.

Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. *ICLR*, 2017.

Ricky T. Q. Chen, Jens Behrmann, David K Duvenaud, and Joern-Henrik Jacobsen. Residual flows for invertible generative modeling. In *Advances in Neural Information Processing Systems*, volume 32, 2019.

Hyunsun Choi, Eric Jang, and Alexander A Alemi. WAIC, but why? Generative ensembles for robust anomaly detection. *arXiv preprint arXiv:1810.01392*, 2018.

Casey Chu, Kentaro Minami, and Kenji Fukumizu. Smoothness and stability in GANs. *ICLR*, 2020.

Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). *ICLR*, 2016.

Rob Cornish, Anthony Caterini, George Deligiannidis, and Arnaud Doucet. Relaxing bijectivity constraints with continuously indexed normalising flows. In *International Conference on Machine Learning*, pp.

2133–2143. PMLR, 2020.

Edmond Cunningham and Madalina Fiterau. A change of variables method for rectangular matrix-vector products. In *Proceedings of The 24th International Conference on Artificial Intelligence and Statistics*,
volume 130. PMLR, 2021.

Bin Dai and David Wipf. Diagnosing and enhancing VAE models. *ICLR*, 2019.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of the 2019 Conference of the North American* Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, 2019.

Adji B Dieng, Francisco JR Ruiz, David M Blei, and Michalis K Titsias. Prescribed generative adversarial networks. *arXiv preprint arXiv:1910.04302*, 2019.

Jean Dieudonné. *Treatise on Analysis: Volume III*. Academic Press, 1973.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. *ICLR*, 2017.

Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. *ICLR*, 2017.

Yilun Du and Igor Mordatch. Implicit generation and modeling with energy based models. *Advances in* Neural Information Processing Systems, 32:3608–3618, 2019.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. *ICLR*, 2017.

Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural Spline Flows. In Advances in Neural Information Processing Systems, volume 32, 2019.

Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. nflows: normalizing flows in PyTorch, November 2020. URL https://doi.org/10.5281/zenodo.4296287.

Jesse Engel, Cinjon Resnick, Adam Roberts, Sander Dieleman, Mohammad Norouzi, Douglas Eck, and Karen Simonyan. Neural audio synthesis of musical notes with wavenet autoencoders. In *International Conference* on Machine Learning, pp. 1068–1077. PMLR, 2017.

Yuanjun Gao, Evan W Archer, Liam Paninski, and John P Cunningham. Linear dynamical neural population models through nonlinear embeddings. *Advances in neural information processing systems*, 29, 2016.

Mevlana C Gemici, Danilo Rezende, and Shakir Mohamed. Normalizing flows on riemannian manifolds.

arXiv preprint arXiv:1611.02304, 2016.

Samuel Gershman and Noah Goodman. Amortized inference in probabilistic reasoning. In Proceedings of the annual meeting of the cognitive science society, volume 36, 2014.

Partha Ghosh, Mehdi SM Sajjadi, Antonio Vergari, Michael Black, and Bernhard Schölkopf. From variational to deterministic autoencoders. *ICLR*, 2020.

Adam Golinski and Anthony L Caterini. Lossless compression using continuously-indexed normalizing flows.

In *Neural Compression: From Information Theory to Applications–Workshop@ ICLR 2021*, 2021.

Rafael Gómez-Bombarelli, Jennifer N Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules.

ACS central science, 4(2):268–276, 2018.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. *Advances in neural information processing* systems, 27, 2014.

Alfred Gray. The volume of a small geodesic ball of a riemannian manifold. The Michigan Mathematical Journal, 20(4):329–344, 1974.

Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *Journal of Machine Learning Research*, 13:723–773, 2012.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In *Advances in Neural Information Processing Systems*, volume 30, 2017.

Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. *Nature*,
585(7825):357–362, September 2020. doi: 10.1038/s41586-020-2649-2. URL https://doi.org/10.1038/s41586-020-2649-2.

Horace He and Richard Zou. functorch: Jax-like composable function transforms for pytorch. https://github.com/pytorch/functorch, 2021.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, volume 30, 2017.

Jonathan Ho, Evan Lohn, and Pieter Abbeel. Compression with flows via local bits-back coding. *Advances in* Neural Information Processing Systems, 32, 2019.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, volume 33, pp. 6840–6851, 2020.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*, 9(8):1735–1780, 1997.

Kurt Hornik. Approximation capabilities of multilayer feedforward networks. *Neural networks*, 4(2):251–257, 1991.

Christian Horvat and Jean-Pascal Pfister. Denoising normalizing flow. *Advances in Neural Information* Processing Systems, 34, 2021a.

Christian Horvat and Jean-Pascal Pfister. Density estimation on low-dimensional manifolds: an inflation-deflation approach. *arXiv preprint arXiv:2105.12152*, 2021b.

Minyoung Huh, Richard Zhang, Jun-Yan Zhu, Sylvain Paris, and Aaron Hertzmann. Transforming and projecting images into class-conditional generative networks. In *European Conference on Computer Vision*,
pp. 17–34. Springer, 2020.

Ahmed Imtiaz Humayun, Randall Balestriero, and Richard Baraniuk. Magnet: Uniform sampling from deep generative network manifolds without retraining. *ICLR*, 2022.

J. D. Hunter. Matplotlib: A 2d graphics environment. *Computing in Science & Engineering*, 9(3):90–95, 2007. doi: 10.1109/MCSE.2007.55.

Aapo Hyvärinen. Estimation of non-normalized statistical models by score matching. *Journal of Machine* Learning Research, 6(4), 2005.

Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp.

4401–4410, 2019.

Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. In *Advances in Neural Information Processing Systems*,
2020a.

Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In *Proc. CVPR*, 2020b.

Yoon Kim, Sam Wiseman, Andrew Miller, David Sontag, and Alexander Rush. Semi-amortized variational autoencoders. In *International Conference on Machine Learning*, pp. 2678–2687. PMLR, 2018.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *ICLR*, 2015.

Diederik P Kingma and Prafulla Dhariwal. Glow: Generative Flow with Invertible 1× 1 Convolutions. In Advances in Neural Information Processing Systems, volume 31, 2018.

Diederik P Kingma and Max Welling. Auto-encoding Variational Bayes. *ICLR*, 2014.

Diederik P Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. In Advances in neural information processing systems, volume 34, 2021.

Polina Kirichenko, Pavel Izmailov, and Andrew G Wilson. Why normalizing flows fail to detect out-of-distribution data. *Advances in neural information processing systems*, 33:20578–20589, 2020.

Thomas Kluyver, Benjamin Ragan-Kelley, Fernando Pérez, Brian Granger, Matthias Bussonnier, Jonathan Frederic, Kyle Kelley, Jessica Hamrick, Jason Grout, Sylvain Corlay, Paul Ivanov, Damián Avila, Safia Abdalla, and Carol Willing. Jupyter notebooks - a publishing format for reproducible computational workflows. In F. Loizides and B. Schmidt (eds.), Positioning and Power in Academic Publishing: Players, Agents and Agendas, pp. 87 - 90. IOS Press, 2016.

Frederic Koehler, Viraj Mehta, and Andrej Risteski. Representational aspects of depth and conditioning in normalizing flows. In *International Conference on Machine Learning*, pp. 5628–5636. PMLR, 2021.

Soheil Kolouri, Phillip E Pope, Charles E Martin, and Gustavo K Rohde. Sliced wasserstein auto-encoders.

ICLR, 2018.

Konik Kothari, AmirEhsan Khorashadizadeh, Maarten de Hoop, and Ivan Dokmanić. Trumpets: Injective flows for inference and inverse problems. In *Proceedings of the Thirty-Seventh Conference on Uncertainty* in Artificial Intelligence, volume 161, pp. 1269–1278, 2021.

Alex Krizhevsky. Learning multiple layers of features from tiny images. *Master's thesis, University of Toronto*,
2009.

Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. *Advances in Neural Information Processing Systems*, 32, 2019.

Charline Le Lan and Laurent Dinh. Perfect density models cannot guarantee anomaly detection. In *"I Can't Believe It's Not Better!" NeurIPS 2020 workshop*, 2020.

Y LeCun. The MNIST database of handwritten digits. URL http://yann.lecun.com/exdb/mnist, 1998.

John M Lee. Smooth manifolds. In *Introduction to Smooth Manifolds*, pp. 1–31. Springer, 2013.

John M Lee. *Introduction to Riemannian manifolds*. Springer, 2018.

Erich L Lehmann and George Casella. *Theory of point estimation*. Springer Science & Business Media, 2006.

Elizaveta Levina and Peter Bickel. Maximum likelihood estimation of intrinsic dimension. Advances in neural information processing systems, 17, 2004.

Gabriel Loaiza-Ganem and John P Cunningham. The continuous bernoulli: fixing a pervasive error in variational autoencoders. *Advances in Neural Information Processing Systems*, 32:13287–13297, 2019.

Gabriel Loaiza-Ganem, Sean Perkins, Karen Schroeder, Mark Churchland, and John P Cunningham. Deep random splines for point process intensity estimation of neural population data. *Advances in Neural* Information Processing Systems, 32, 2019.

Romain Lopez, Adam Gayoso, and Nir Yosef. Enhancing scientific discoveries in molecular biology with deep generative models. *Molecular Systems Biology*, 16(9):e9198, 2020.

Emile Mathieu and Maximilian Nickel. Riemannian continuous normalizing flows. In *Advances in Neural* Information Processing Systems, volume 33, 2020.

Pierre-Alexandre Mattei and Jes Frellsen. Leveraging the exact likelihood of deep latent variable models.

Advances in Neural Information Processing Systems, 31, 2018.

Chenlin Meng, Jiaming Song, Yang Song, Shengjia Zhao, and Stefano Ermon. Improved autoregressive modeling with distribution smoothing. *ICLR*, 2021.

Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial Variational Bayes: Unifying variational autoencoders and generative adversarial networks. In *International Conference on Machine Learning*, pp.

2391–2400. PMLR, 2017.

Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. *arXiv preprint* arXiv:1610.03483, 2016.

Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, and Jaejun Yoo. Reliable fidelity and diversity metrics for generative models. In *International Conference on Machine Learning*, pp. 7176–7185.

PMLR, 2020.

Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don't know? *ICLR*, 2019.

Hariharan Narayanan and Sanjoy Mitter. Sample complexity of testing the manifold hypothesis. *Advances in* neural information processing systems, 23, 2010.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011. URL http://ufldl.stanford.edu/housenumbers/nips2011_housenumbers.pdf.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. F-GAN: Training generative neural samplers using variational divergence minimization. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pp. 271–279, 2016.

Sydney Otten, Sascha Caron, Wieske de Swart, Melissa van Beekveld, Luc Hendriks, Caspar van Leeuwen, Damian Podareanu, Roberto Ruiz de Austri, and Rob Verheyen. Event generation and statistical sampling for physics with deep generative models and a density information buffer. *Nature communications*, 12(1):
1–16, 2021.

Arkadas Ozakin and Alexander Gray. Submanifold density estimation. *Advances in Neural Information* Processing Systems, 22, 2009.

Govinda Anantha Padmanabha and Nicholas Zabaras. Solving inverse problems using conditional invertible neural networks. *Journal of Computational Physics*, 433:110194, 2021.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. *Advances in neural information processing systems*, 32:8026–8037, 2019.

Giorgio Patrini, Rianne van den Berg, Patrick Forre, Marcello Carioni, Samarth Bhargav, Max Welling, Tim Genewein, and Frank Nielsen. Sinkhorn autoencoders. In *Uncertainty in Artificial Intelligence*, pp. 733–743. PMLR, 2020.

Xavier Pennec. Intrinsic statistics on riemannian manifolds: Basic tools for geometric measurements. *Journal* of Mathematical Imaging and Vision, 25(1):127–154, 2006.

Phillip Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, and Tom Goldstein. The intrinsic dimension of images and its impact on learning. *ICLR*, 2021.

Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions. *arXiv preprint* arXiv:1710.05941, 2017.

Aditya Ramesh and Yann LeCun. Backpropagation for implicit spectral densities. *arXiv preprint* arXiv:1806.00499, 2018.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. *arXiv preprint arXiv:2204.06125*, 2022.

Suman Ravuri, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons, Maria Athanassiadou, Sheleem Kashem, Sam Madge, et al. Skilful precipitation nowcasting using deep generative models of radar. *Nature*, 597(7878):672–677, 2021.

Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2.

In *Advances in neural information processing systems*, pp. 14866–14876, 2019.

Jie Ren, Peter J Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark Depristo, Joshua Dillon, and Balaji Lakshminarayanan. Likelihood ratios for out-of-distribution detection. In *Advances in Neural Information* Processing Systems, volume 32, pp. 14707–14718, 2019.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *International conference on machine learning*, pp. 1278–1286.

PMLR, 2014.

Danilo Jimenez Rezende, George Papamakarios, Sébastien Racaniere, Michael Albergo, Gurtej Kanwar, Phiala Shanahan, and Kyle Cranmer. Normalizing flows on tori and spheres. In International Conference on Machine Learning, pp. 8083–8092. PMLR, 2020.

Adam J Riesselman, John B Ingraham, and Debora S Marks. Deep generative models of genetic variation capture the effects of mutations. *Nature methods*, 15(10):816–822, 2018.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684–10695, 2022.

Brendan Leigh Ross and Jesse C Cresswell. Tractable density estimation on learned manifolds with conformal embedding flows. In *Advances in Neural Information Processing Systems*, volume 34, 2021.

David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, California Univ San Diego La Jolla Inst for Cognitive Science, 1985.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. *arXiv preprint arXiv:2205.11487*, 2022.

Mehdi SM Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly. Assessing generative models via precision and recall. *Advances in Neural Information Processing Systems*, 31, 2018.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. *Advances in neural information processing systems*, 29, 2016.

Maximilian Seitzer. pytorch-fid: FID Score for PyTorch. https://github.com/mseitzer/pytorch-fid, August 2020. Version 0.2.1.

Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In Proceedings of the 33rd Annual Conference on Neural Information Processing Systems, 2019.

Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole.

Score-based generative modeling through stochastic differential equations. In *ICLR*, 2021.

Ryan Steed and Aylin Caliskan. Image representations learned with unsupervised pre-training contain human-like biases. In *Proceedings of the 2021 ACM conference on fairness, accountability, and transparency*,
pp. 701–713, 2021.

David Sussillo, Rafal Jozefowicz, LF Abbott, and Chethan Pandarinath. Lfads-latent factor analysis via dynamical systems. *arXiv preprint arXiv:1608.06315*, 2016.

Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. *Advances in Neural* Information Processing Systems, 28:1927–1935, 2015.

Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein auto-encoders. *ICLR*,
2018.

James Townsend, Tom Bird, and David Barber. Practical lossless compression with latent variables using bits back coding. *ICLR*, 2019.

Fabio Urbina, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins. Dual use of artificial-intelligence-powered drug discovery. *Nature Machine Intelligence*, 4(3):189–191, 2022.

Benigno Uria, Iain Murray, and Hugo Larochelle. Rnade: The real-valued neural autoregressive density estimator. In *Advances in Neural Information Processing Systems 26 (NIPS 26)*, 2013.

Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a.

Aäron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on International Conference on Machine Learning-Volume 48, pp. 1747–1756, 2016b.

Guido Van Rossum and Fred L. Drake. *Python 3 Reference Manual*. CreateSpace, Scotts Valley, CA, 2009.

ISBN 1441412697.

Pascal Vincent. A connection between score matching and denoising autoencoders. *Neural computation*, 23
(7):1661–1674, 2011.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In *Proceedings of the 25th international conference on Machine* learning, pp. 1096–1103, 2008.

Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. *Advances* in neural information processing systems, 29, 2016.

Wei Wang, Yan Huang, Yizhou Wang, and Liang Wang. Generalized autoencoder: A neural network framework for dimensionality reduction. In *Proceedings of the IEEE conference on computer vision and* pattern recognition workshops, pp. 490–497, 2014.

Yingfan Wang, Haiyang Huang, Cynthia Rudin, and Yaron Shaposhnik. Understanding how dimension reduction tools work: An empirical approach to deciphering t-SNE, UMAP, TriMAP, and PaCMAP for data visualization. *arXiv preprint arXiv:2012.04456*, 2020.

Dirk Weissenborn, Oscar Täckström, and Jakob Uszkoreit. Scaling autoregressive video models. *ICLR*, 2020.

Max Welling and Yee W Teh. Bayesian learning via stochastic gradient langevin dynamics. In *Proceedings of the 28th international conference on machine learning (ICML-11)*, pp. 681–688. Citeseer, 2011.

Weihao Xia, Yulun Zhang, Yujiu Yang, Jing-Hao Xue, Bolei Zhou, and Ming-Hsuan Yang. GAN inversion: A
survey. *arXiv preprint arXiv:2101.05278*, 2021.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. *arXiv preprint arXiv:1708.07747*, 2017.

Zhisheng Xiao, Qing Yan, and Yali Amit. Generative latent flow. *arXiv preprint arXiv:1905.10485*, 2019.

Yibo Yang, Stephan Mandt, and Lucas Theis. An introduction to neural data compression. *arXiv preprint arXiv:2202.06533*, 2022.

Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N
Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks.

In *Proceedings of the IEEE international conference on computer vision*, pp. 5907–5915, 2017.

Hongjie Zhang, Ang Li, Jie Guo, and Yanwen Guo. Hybrid models for open set recognition. In European Conference on Computer Vision, pp. 102–117. Springer, 2020a.

Mingtian Zhang, Peter Hayes, Thomas Bird, Raza Habib, and David Barber. Spread divergence. In International Conference on Machine Learning, pp. 11106–11116. PMLR, 2020b.

Zijun Zhang, Ruixiang Zhang, Zongpeng Li, Yoshua Bengio, and Liam Paull. Perceptual generative autoencoders. In *International Conference on Machine Learning*, pp. 11298–11306. PMLR, 2020c.

Ev Zisselman and Aviv Tamar. Deep residual flow for out of distribution detection. In *Proceedings of the* IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13994–14003, 2020.

## A Informal Measure Theory Primer

Before stating Theorems 1 and 2, and studying their implications, we provide a brief tutorial on some aspects of measure theory that are relevant to follow our discussion. This review is not meant to be comprehensive, and we prioritize intuition over formalism. Readers interested in the topic may consult textbooks such as Billingsley (2008).

## A.1 Probability Measures

Let us first motivate the need for measure theory in the first place and consider the question: *what is a* density? Intuitively, the density pX of a random variable X is a function having the property that integrating pX over any set A gives back the probability that X ∈ A. This density characterizes the distribution of X, in that it can be used to answer any probabilistic question about X. It is common knowledge that discrete random variables are not specified through a density, but rather a probability mass function. Similarly, in our setting, where X might always take values in M, such a density will not exist. To see this, consider the case where A = M, so that the integral of pX over M would have to be 1, which cannot happen since M
has volume 0 in $\mathbb{R}^D$ (or more formally, Lebesgue measure 0). Measure theory provides the tools necessary to properly specify *any distribution*, subsuming as special cases probability mass functions, densities of continuous random variables, and distributions on manifolds.

A measure $\mu$ on $\mathbb{R}^D$ is a function mapping subsets $A \subseteq \mathbb{R}^D$ to $\mathbb{R}_{\geq 0}$, obeying the following properties: (i) $\mu(A) \geq 0$ for every $A$, (ii) $\mu(\emptyset) = 0$, where $\emptyset$ denotes the empty set, and (iii) $\mu(\cup_{k=1}^\infty A_k) = \sum_{k=1}^\infty \mu(A_k)$ for any sequence of pairwise disjoint sets $A_1, A_2, \dots$ (i.e. $A_i \cap A_j = \emptyset$ whenever $i \neq j$). Note that most measures of interest are only defined over a large class of subsets of $\mathbb{R}^D$ (called σ-algebras, the most notable one being the Borel σ-algebra) rather than for every possible subset due to technical reasons, but we omit details in the interest of better conveying intuition. A measure is called a *probability* measure if it also satisfies $\mu(\mathbb{R}^D) = 1$.

To any random variable X corresponds a probability measure µX, having the property that µX(A) is the probability that X ∈ A for any A. Analogously to probability mass functions or densities of continuous random variables, µX allows us to answer any probabilistic question about X. The probability measure µX
is often called the distribution or law of $X$. Throughout our paper, $\mathbb{P}^*$ is the distribution from which we observe data.

Let us consider two examples to show how probability mass functions and densities of continuous random variables are really just specifying distributions. Given $a_1, \dots, a_K \in \mathbb{R}^D$, consider the probability mass function of a random variable $X$ given by $p_X(x) = 1/K$ for $x = a_1, a_2, \dots, a_K$ and 0 otherwise. This probability mass function is simply specifying the distribution $\mu_X(A) = 1/K \cdot \sum_{k=1}^K \mathbb{1}(a_k \in A)$, where $\mathbb{1}(\cdot \in A)$ denotes the indicator function for $A$, i.e. $\mathbb{1}(a \in A)$ is 1 if $a \in A$, and 0 otherwise. Now consider a standard Gaussian random variable $X$ in $\mathbb{R}^D$ with density $p_X(x) = \mathcal{N}(x; 0, I_D)$. Similarly to how the probability mass function from the previous example characterized a distribution, this density does so as well through $\mu_X(A) = \int_A \mathcal{N}(x; 0, I_D)\,\mathrm{d}x$. We will see in the next section how these ideas can be extended to distributions on manifolds.

The concept of integrating a function $h: \mathbb{R}^D \rightarrow \mathbb{R}$ with respect to a measure $\mu$ on $\mathbb{R}^D$ is fundamental in measure theory, and can be thought of as "weighting the inputs of $h$ according to $\mu$". In the case of the Lebesgue measure $\mu_D$ (which assigns to subsets $A$ of $\mathbb{R}^D$ their "volume" $\mu_D(A)$), integration extends the concept of Riemann integrals commonly taught in calculus courses, and in the case of random variables integration defines expectations, i.e. $\mathbb{E}_{X \sim \mu_X}[h(X)] = \int h\,\mathrm{d}\mu_X$. In the next section we will talk about the interplay between integration and densities.

We finish this section by explaining the relevant concept of a property holding almost surely with respect to a measure $\mu$. A property is said to hold $\mu$-*almost surely* if the set $A$ over which it does not hold is such that $\mu(A) = 0$. For example, if $\mu_X$ is the distribution of a standard Gaussian random variable $X$ in $\mathbb{R}^D$, then we can say that $X \neq 0$ holds $\mu_X$-almost surely, since $\mu_X(\{0\}) = 0$. The assumption that $G(g(x)) = x$, $\mathbb{P}^*$-almost surely in Theorem 2 thus means that $\mathbb{P}^*(\{x \in \mathbb{R}^D : G(g(x)) \neq x\}) = 0$.

## A.2 Absolute Continuity

So far we have seen that probability measures allow us to talk about distributions in full generality, and that probability mass functions and densities of continuous random variables can be used to specify probability measures. A distribution on a manifold $\mathcal{M}$ embedded in $\mathbb{R}^D$ can simply be thought of as a probability measure $\mu$ such that $\mu(\mathcal{M}) = 1$. We would like to define densities on manifolds in an analogous way to probability mass functions and densities of continuous random variables, in such a way that they allow us to characterize distributions on the manifold. Absolute continuity of measures is a concept that allows us to formalize the concept of *density with respect to a dominating measure*, and encompasses probability mass functions, densities of continuous random variables, and also allows us to define densities on manifolds. We will see that our intuitive definition of a density as a function which, when integrated over a set gives back its probability, is in fact correct, just as long as we specify the measure we integrate with respect to.

Given two measures $\mu$ and $\nu$, we say that $\mu$ is *absolutely continuous* with respect to $\nu$ if for every $A$ such that $\nu(A) = 0$, it also holds that $\mu(A) = 0$. If $\mu$ is absolutely continuous with respect to $\nu$, we also say that $\nu$ *dominates* $\mu$, and denote this property as $\mu \ll \nu$. The Radon-Nikodym theorem states that, under some mild assumptions on $\mu$ and $\nu$ which hold for all the measures considered in this paper, $\mu \ll \nu$ implies the existence of a function $h$ such that $\mu(A) = \int_A h\,\mathrm{d}\nu$ for every $A$. This result provides the means to formally define densities: $h$ is called the density or *Radon-Nikodym derivative* of $\mu$ with respect to $\nu$, and is often written as $\mathrm{d}\mu/\mathrm{d}\nu$.

Before explaining how this machinery allows us to talk about densities on manifolds, we first continue our examples to show that probability mass functions and densities of continuous random variables are Radon-Nikodym derivatives with respect to appropriate measures. Let us reconsider the example where $p_X(x) = 1/K$ for $x = a_1, a_2, \dots, a_K$ and 0 otherwise, and $\mu_X(A) = 1/K \cdot \sum_{k=1}^K \mathbb{1}(a_k \in A)$. Consider the measure $\nu(A) = \sum_{k=1}^K \mathbb{1}(a_k \in A)$, which essentially just counts the number of $a_k$s in $A$. Clearly $\mu_X \ll \nu$, and so it follows that $\mu_X$ admits a density with respect to $\nu$. This density turns out to be $p_X$, since $\mu_X(A) = \int_A p_X\,\mathrm{d}\nu$. In other words, the probability mass function $p_X$ can be thought of as a Radon-Nikodym derivative, i.e. $p_X = \mathrm{d}\mu_X/\mathrm{d}\nu$. Let us now go back to the continuous density example where $p_X(x) = \mathcal{N}(x; 0, I_D)$ and $\mu_X$ is given by the Riemann integral $\mu_X(A) = \int_A \mathcal{N}(x; 0, I_D)\,\mathrm{d}x$. In this case, $\nu = \mu_D$, and since the Lebesgue integral extends the Riemann integral, it follows that $\mu_X(A) = \int_A p_X\,\mathrm{d}\mu_D$, so that the density $p_X$ is actually also a density in the formal sense of being a Radon-Nikodym derivative, so that $p_X = \mathrm{d}\mu_X/\mathrm{d}\mu_D$. We can thus see that the formal concept of density or Radon-Nikodym derivative generalizes both probability mass functions and densities of continuous random variables as we usually think of them, allowing us to specify distributions in a general way.

The concept of Radon-Nikodym derivative also allows us to obtain densities on manifolds, the only missing ingredient being a dominating measure on the manifold. Riemannian measures (App. B.1) play this role on manifolds, in the same way that the Lebesgue measure plays the usual role of dominating measure to define densities of continuous random variables on $\mathbb{R}^D$.
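As a simple example of a density on a manifold, consider the uniform distribution on the unit circle:

$$\mathbb{S}^1 = \{x \in \mathbb{R}^2 : \lVert x \rVert_2 = 1\}, \qquad \mu_X(A) = \frac{\text{arc length of } A \cap \mathbb{S}^1}{2\pi}.$$

Here $\mu_X$ admits no density with respect to $\mu_2$, since $\mu_2(\mathbb{S}^1) = 0$ while $\mu_X(\mathbb{S}^1) = 1$; but with respect to the arc-length (Riemannian) measure on $\mathbb{S}^1$ it has the constant density $1/(2\pi)$.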

## A.3 Weak Convergence

A key point in Theorem 1 is weak convergence of the sequence of probability measures $(\mathbb{P}_t)_{t=1}^\infty$ to $\mathbb{P}^\dagger$. The intuitive interpretation that this statement simply means that "$\mathbb{P}_t$ converges to $\mathbb{P}^\dagger$" is correct, although formally defining convergence of a sequence of measures is still required. Weak convergence provides such a definition, and $\mathbb{P}_t$ is said to *converge weakly* to $\mathbb{P}^\dagger$ if the sequence of scalars $\mathbb{P}_t(A)$ converges to $\mathbb{P}^\dagger(A)$ for every $A$ satisfying a technical condition (for intuitive purposes, one can think of this property as holding for every $A$). In this sense weak convergence is a very natural way of defining convergence of measures: in the limit, $\mathbb{P}_t$ will assign the same probability to every set as $\mathbb{P}^\dagger$.
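A standard example, which mirrors the Gaussian-convolution construction used to prove Theorem 1, is a sequence of Gaussians collapsing onto a point:

$$\mathbb{P}_t = \mathcal{N}(0, 1/t) \to \delta_0 \ \text{weakly}, \qquad \text{e.g. } \mathbb{P}_t([-a, a]) \to 1 = \delta_0([-a, a]) \ \text{for every } a > 0.$$

The technical condition mentioned above excludes sets whose boundary is charged by the limit, such as $A = \{0\}$ here, for which $\mathbb{P}_t(\{0\}) = 0$ for all $t$ even though $\delta_0(\{0\}) = 1$.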

## A.4 Pushforward Measures

We have seen that to a random variable $X$ in $\mathbb{R}^D$ corresponds a distribution $\mu_X$. Applying a function $h: \mathbb{R}^D \rightarrow \mathbb{R}^d$ to $X$ will result in a new random variable, $h(X)$ in $\mathbb{R}^d$, and it is natural to ask what its distribution is. This distribution is called the *pushforward measure* of $\mu_X$ through $h$, which is denoted as $h_\# \mu_X$, and is defined as $h_\# \mu_X(B) = \mu_X(h^{-1}(B))$ for every subset $B$ of $\mathbb{R}^d$. A way to intuitively understand this concept is that if one could sample $X$ from $\mu_X$, then sampling from $h_\# \mu_X$ can be done by simply applying $h$ to $X$. Note that here $h_\# \mu_X$ is a measure on $\mathbb{R}^d$.

The concept of pushforward measure is relevant in Theorem 2 as it allows us to formally reason about e.g. the distribution of encoded data, $g_\# \mathbb{P}^*$. Similarly, for a distribution $\mathbb{P}_Z$ corresponding to our second-step model, we can reason about the distribution obtained after decoding, i.e. $G_\# \mathbb{P}_Z$.
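A simple closed-form example is the pushforward of a Gaussian through an affine map:

$$h(x) = Ax + b, \quad \mu_X = \mathcal{N}(m, \Sigma) \implies h_\#\mu_X = \mathcal{N}(Am + b, A\Sigma A^\top),$$

which matches the sampling intuition above: applying $h$ to a sample from $\mu_X$ yields a sample from $h_\#\mu_X$. For a nonlinear encoder $g$, the pushforward $g_\#\mathbb{P}^*$ generally has no such closed form, but it is still a perfectly well-defined measure on $\mathbb{R}^d$.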

## B Proofs

## B.1 Riemannian Measures

We begin with a quick review of Riemannian measures. Let $\mathcal{M}$ be a $d$-dimensional Riemannian manifold with Riemannian metric $\mathfrak{g}$, and let $(U, \phi)$ be a chart. The local Riemannian measure $\mu_{\mathcal{M},\phi}^{(\mathfrak{g})}$ on $\mathcal{M}$ (with its Borel σ-algebra) is given by:

$$\mu_{\mathcal{M},\phi}^{(\mathfrak{g})}(A)=\int_{\phi(A\cap U)}\sqrt{\det\left(\mathfrak{g}\left(\frac{\partial}{\partial\phi^{i}},\frac{\partial}{\partial\phi^{j}}\right)\right)}\,\mathrm{d}\mu_{d}\tag{3}$$

for any measurable $A \subseteq \mathcal{M}$. The Riemannian measure $\mu_{\mathcal{M}}^{(\mathfrak{g})}$ on $\mathcal{M}$ is such that:

$$\mu_{\mathcal{M}}^{(\mathfrak{g})}(A\cap U)=\mu_{\mathcal{M},\phi}^{(\mathfrak{g})}(A)\tag{4}$$

for every measurable $A \subseteq \mathcal{M}$ and every chart $(U, \phi)$.
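As a worked instance of (3), take $\mathcal{M} = \mathbb{S}^1 \subset \mathbb{R}^2$ with the metric inherited from $\mathbb{R}^2$ and the angular chart $\phi^{-1}(\theta) = (\cos\theta, \sin\theta)$ on $U = \mathbb{S}^1 \setminus \{(1, 0)\}$:

$$\mathfrak{g}\left(\frac{\partial}{\partial\phi}, \frac{\partial}{\partial\phi}\right) = \left\lVert \frac{\mathrm{d}}{\mathrm{d}\theta}(\cos\theta, \sin\theta) \right\rVert_2^2 = \sin^2\theta + \cos^2\theta = 1, \qquad \mu^{(\mathfrak{g})}_{\mathbb{S}^1,\phi}(A) = \int_{\phi(A \cap U)} 1\,\mathrm{d}\mu_1,$$

so the Riemannian measure of a subset of the circle is simply its arc length.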

If $\mathfrak{g}_1$ and $\mathfrak{g}_2$ are two Riemannian metrics on $\mathcal{M}$, then $\mu_{\mathcal{M}}^{(\mathfrak{g}_1)} \ll \mu_{\mathcal{M}}^{(\mathfrak{g}_2)}$ and $\mu_{\mathcal{M}}^{(\mathfrak{g}_1)}$ admits a continuous and positive density with respect to $\mu_{\mathcal{M}}^{(\mathfrak{g}_2)}$. Thus, as mentioned in the main manuscript, smoothness of probability measures is indeed independent of the choice of Riemannian metric.

Below we prove a lemma which we will later use, showing that much like the Lebesgue measure, Riemannian measures assign positive measure to nonempty open sets. While we are sure this is a known property, we could not find a proof and thus provide one.

Lemma 1: Let $\mathcal{M}$ be a $d$-dimensional Riemannian manifold, and $\mu_{\mathcal{M}}^{(\mathfrak{g})}$ a Riemannian measure on it. Let $A \subseteq \mathcal{M}$ be a nonempty open set in $\mathcal{M}$. Then $\mu_{\mathcal{M}}^{(\mathfrak{g})}(A) > 0$.

Proof: Let $(U, \phi)$ be a chart such that $U \cap A \neq \emptyset$, which exists because $A \neq \emptyset$. Clearly $U \cap A$ is open, and since $\phi$ is a diffeomorphism onto its image, it follows that $\phi(U \cap A) \subseteq \mathbb{R}^d$ is also open and nonempty, and thus $\mu_d(\phi(U \cap A)) > 0$. As a result,

$$\mu^{(\mathfrak{g})}_{\mathcal{M}}(A)\geq\mu^{(\mathfrak{g})}_{\mathcal{M}}(U\cap A)=\int_{\phi(U\cap A)}\sqrt{\det\left(\mathfrak{g}\left(\frac{\partial}{\partial\phi^{i}},\frac{\partial}{\partial\phi^{j}}\right)\right)}\mathrm{d}\mu_{d}>0,\tag{5}$$

where the last inequality follows since the integrand is positive and the integration set has positive measure.

## B.2 Manifold Overfitting Theorem

We restate the manifold overfitting theorem below for convenience:
Theorem 1 (Manifold Overfitting): Let $\mathcal{M} \subset \mathbb{R}^D$ be an analytic $d$-dimensional embedded submanifold of $\mathbb{R}^D$ with $d < D$, and $\mathbb{P}^\dagger$ a smooth probability measure on $\mathcal{M}$. Then there exists a sequence of probability measures $(\mathbb{P}_t)_{t=1}^\infty$ on $\mathbb{R}^D$ such that:

1. $\mathbb{P}_t \to \mathbb{P}^\dagger$ weakly as $t \to \infty$.

2. For every $t \geq 1$, $\mathbb{P}_t \ll \mu_D$ and $\mathbb{P}_t$ admits a density $p_t: \mathbb{R}^D \to \mathbb{R}_{>0}$ with respect to $\mu_D$ such that:

   (a) $\lim_{t\to\infty} p_t(x) = \infty$ for every $x \in \mathcal{M}$.

   (b) $\lim_{t\to\infty} p_t(x) = 0$ for every $x \notin \operatorname{cl}(\mathcal{M})$, where $\operatorname{cl}(\cdot)$ denotes closure in $\mathbb{R}^D$.
Before proving the theorem, note that $\mathbb{P}^\dagger$ is a distribution on $\mathcal{M}$ and $\mathbb{P}_t$ is a distribution on $\mathbb{R}^D$, with their respective Borel σ-algebras. Weak convergence is defined for measures on the same probability space, and so we slightly abuse notation and think of $\mathbb{P}^\dagger$ as a measure on $\mathbb{R}^D$ assigning to any measurable set $A \subseteq \mathbb{R}^D$ the probability $\mathbb{P}^\dagger(A \cap \mathcal{M})$, which is well-defined as $\mathcal{M}$ is an embedded submanifold of $\mathbb{R}^D$. We do not differentiate between $\mathbb{P}^\dagger$ on $\mathcal{M}$ and $\mathbb{P}^\dagger$ on $\mathbb{R}^D$ to avoid cumbersome notation.

Proof: Let $Y$ be a random variable whose law is $\mathbb{P}^\dagger$, and let $(Z_t)_{t=1}^\infty$ be a sequence of i.i.d. standard Gaussians in $\mathbb{R}^D$, independent of $Y$. We assume all the variables are defined on the same probability space $(\Omega, \mathcal{F}, \mathbb{P})$.

Let $X_t = Y + \sigma_t Z_t$ where $(\sigma_t)_{t=1}^\infty$ is a positive sequence converging to 0. Let $\mathbb{P}_t$ be the law of $X_t$.

First we prove 1. Clearly $\sigma_t Z_t \to 0$ in probability and $Y \to Y$ in distribution as $t \to \infty$. Since $\sigma_t Z_t$ converges in probability to a constant, it follows that $X_t \to Y$ in distribution, and thus $\mathbb{P}_t \to \mathbb{P}^\dagger$ weakly.

Now we prove that $\mathbb{P}_t \ll \mu_D$. Let $A \subseteq \mathbb{R}^D$ be a measurable set such that $\mu_D(A) = 0$. We denote the law of $\sigma_t Z_t$ as $\mathbb{G}_t$ and the Gaussian density in $\mathbb{R}^D$ with mean $m$ and covariance matrix $\Sigma$ evaluated at $w$ as $\mathcal{N}(w; m, \Sigma)$. Let $B = \{(w, y) \in \mathbb{R}^D \times \mathcal{M} : w + y \in A\}$. By Fubini's theorem:

$$\mathbb{P}_{t}(A)=\mathbb{P}(Y+\sigma_{t}Z_{t}\in A)=\int_{B}\mathrm{d}\,\mathbb{G}_{t}\times\mathbb{P}^{\dagger}(w,y)=\int_{B}\mathcal{N}(w;0,\sigma_{t}^{2}\mathrm{I}_{D})\,\mathrm{d}\mu_{D}\times\mathbb{P}^{\dagger}(w,y)\tag{6}$$
$$=\int_{A\times\mathcal{M}}\mathcal{N}(x-y;0,\sigma_{t}^{2}\mathrm{I}_{D})\,\mathrm{d}\mu_{D}\times\mathbb{P}^{\dagger}(x,y)=\int_{\mathcal{M}}\int_{A}\mathcal{N}(x-y;0,\sigma_{t}^{2}\mathrm{I}_{D})\,\mathrm{d}\mu_{D}(x)\,\mathrm{d}\mathbb{P}^{\dagger}(y)\tag{7}$$
$$=\int_{\mathcal{M}}0\,\mathrm{d}\mathbb{P}^{\dagger}(y)=0.\tag{8}$$

Then, $\mathbb{P}_t \ll \mu_D$, proving the first part of 2. Note also that:

$$p_{t}(x)=\int_{\mathcal{M}}\mathcal{N}(x-y;0,\sigma_{t}^{2}\mathrm{I}_{D})\,\mathrm{d}\mathbb{P}^{\dagger}(y)\tag{9}$$

is a valid density for $\mathbb{P}_t$ with respect to $\mu_D$, once again by Fubini's theorem since, for any measurable set $A \subseteq \mathbb{R}^D$:

$$\int_{A}p_{t}(x)\,\mathrm{d}\mu_{D}(x)=\int_{A}\int_{\mathcal{M}}\mathcal{N}(x-y;0,\sigma_{t}^{2}\mathrm{I}_{D})\,\mathrm{d}\mathbb{P}^{\dagger}(y)\,\mathrm{d}\mu_{D}(x)\tag{10}$$
$$=\int_{A\times\mathcal{M}}\mathcal{N}(x-y;0,\sigma_{t}^{2}\mathrm{I}_{D})\,\mathrm{d}\mu_{D}\times\mathbb{P}^{\dagger}(x,y)=\mathbb{P}_{t}(A).\tag{11}$$

We now prove 2a. Since $\mathbb{P}^\dagger$ being smooth is independent of the choice of Riemannian measure, we can assume without loss of generality that the Riemannian metric $\mathfrak{g}$ on $\mathcal{M}$ is the metric inherited from thinking of $\mathcal{M}$ as a submanifold of $\mathbb{R}^D$, and we can then take a continuous and positive density $p^\dagger$ with respect to the Riemannian measure $\mu^{(\mathfrak{g})}_{\mathcal{M}}$ associated with this metric.

Take $x \in \mathcal{M}$ and let $B^{\mathcal{M}}_r(x) = \{y \in \mathcal{M} : d^{(\mathfrak{g})}_{\mathcal{M}}(x, y) \leq r\}$ denote the geodesic ball on $\mathcal{M}$ of radius $r$ centered at $x$, where $d^{(\mathfrak{g})}_{\mathcal{M}}$ is the geodesic distance. We then have:

$$p_t(x) = \int_{\mathcal{M}} \mathcal{N}(x-y; 0, \sigma_t^2 \mathrm{I}_D)\,\mathrm{d}\mathbb{P}^\dagger(y) \geq \int_{B^{\mathcal{M}}_{\sigma_t}(x)} \mathcal{N}(x-y; 0, \sigma_t^2 \mathrm{I}_D)\,\mathrm{d}\mathbb{P}^\dagger(y)\tag{12}$$
$$= \int_{B^{\mathcal{M}}_{\sigma_t}(x)} p^\dagger(y) \cdot \mathcal{N}(x-y; 0, \sigma_t^2 \mathrm{I}_D)\,\mathrm{d}\mu^{(\mathfrak{g})}_{\mathcal{M}}(y) \geq \int_{B^{\mathcal{M}}_{\sigma_t}(x)} \inf_{y' \in B^{\mathcal{M}}_{\sigma_t}(x)} p^\dagger(y')\,\mathcal{N}(x-y'; 0, \sigma_t^2 \mathrm{I}_D)\,\mathrm{d}\mu^{(\mathfrak{g})}_{\mathcal{M}}(y)\tag{13}$$
$$= \mu^{(\mathfrak{g})}_{\mathcal{M}}(B^{\mathcal{M}}_{\sigma_t}(x)) \cdot \inf_{y' \in B^{\mathcal{M}}_{\sigma_t}(x)} p^\dagger(y')\,\mathcal{N}(x-y'; 0, \sigma_t^2 \mathrm{I}_D)\tag{14}$$
$$\geq \mu^{(\mathfrak{g})}_{\mathcal{M}}(B^{\mathcal{M}}_{\sigma_t}(x)) \cdot \inf_{y' \in B^{\mathcal{M}}_{\sigma_t}(x)} \mathcal{N}(x-y'; 0, \sigma_t^2 \mathrm{I}_D) \cdot \inf_{y' \in B^{\mathcal{M}}_{\sigma_t}(x)} p^\dagger(y').\tag{15}$$

Since $B^{\mathcal{M}}_{\sigma_t}(x)$ is compact in $\mathcal{M}$ for small enough $\sigma_t$ and $p^\dagger$ is continuous in $\mathcal{M}$ and positive, it follows that $\inf_{y' \in B^{\mathcal{M}}_{\sigma_t}(x)} p^\dagger(y')$ is bounded away from 0 as $t \to \infty$. It is then enough to show that as $t \to \infty$,

$$\mu_{\mathcal{M}}^{(\mathfrak{g})}(B_{\sigma_{t}}^{\mathcal{M}}(x))\cdot\inf_{y^{\prime}\in B_{\sigma_{t}}^{\mathcal{M}}(x)}\mathcal{N}(x-y^{\prime};0,\sigma_{t}^{2}\mathrm{I}_{D})\to\infty\tag{16}$$

in order to prove that 2a holds. Let $B^d_r(0)$ denote an $L_2$ ball of radius $r$ in $\mathbb{R}^d$ centered at $0 \in \mathbb{R}^d$, and let $\mu_d$ denote the Lebesgue measure on $\mathbb{R}^d$, so that $\mu_d(B^d_r(0)) = C_d r^d$, where $C_d > 0$ is a constant depending only on $d$. It is known that $\mu^{(\mathfrak{g})}_{\mathcal{M}}(B^{\mathcal{M}}_r(x)) = \mu_d(B^d_r(0)) \cdot (1 + O(r^2))$ for analytic $d$-dimensional Riemannian manifolds (Gray, 1974), and thus:

$$\mu^{(\mathfrak{g})}_{\mathcal{M}}(B^{\mathcal{M}}_{\sigma_t}(x)) \cdot \inf_{y' \in B^{\mathcal{M}}_{\sigma_t}(x)} \mathcal{N}(x-y'; 0, \sigma_t^2 \mathrm{I}_D) = C_d \sigma_t^d \left(1 + O(\sigma_t^2)\right) \cdot \inf_{y' \in B^{\mathcal{M}}_{\sigma_t}(x)} \frac{1}{\sigma_t^D (2\pi)^{D/2}} \exp\left(-\frac{\lVert x - y' \rVert_2^2}{2\sigma_t^2}\right)\tag{17}$$
$$= \frac{C_d}{(2\pi)^{D/2}} \cdot \left(1 + O(\sigma_t^2)\right) \cdot \sigma_t^{d-D} \cdot \exp\left(-\frac{\sup_{y' \in B^{\mathcal{M}}_{\sigma_t}(x)} \lVert x - y' \rVert_2^2}{2\sigma_t^2}\right).\tag{18}$$

The first term is a positive constant, and the second term converges to 1. The third term goes to infinity since $d < D$, which leaves only the last term. Thus, as long as the last term is bounded away from 0 as $t \to \infty$, we can be certain that the product of all four terms goes to infinity. In particular, verifying the following equation would be enough:

$$\sup_{y^{\prime}\in B^{\mathcal{M}}_{\sigma_t}(x)}\|x-y^{\prime}\|_{2}^{2}\leq\sigma_{t}^{2}.\tag{19}$$

This equation holds, since for any $x, y' \in \mathcal{M}$, it is the case that $\lVert x - y' \rVert_2 \leq d^{(\mathfrak{g})}_{\mathcal{M}}(x, y')$ as $\mathfrak{g}$ is inherited from $\mathcal{M}$ being a submanifold of $\mathbb{R}^D$.

Now we prove 2b for $p_t$. Let $x \in \mathbb{R}^D \setminus \operatorname{cl}(\mathcal{M})$. We have:

$$p_t(x) = \int_{\mathcal{M}} \mathcal{N}(x-y; 0, \sigma_t^2 \mathrm{I}_D)\,\mathrm{d}\mathbb{P}^\dagger(y) \leq \int_{\mathcal{M}} \sup_{y' \in \mathcal{M}} \mathcal{N}(x-y'; 0, \sigma_t^2 \mathrm{I}_D)\,\mathrm{d}\mathbb{P}^\dagger(y) = \sup_{y' \in \mathcal{M}} \mathcal{N}(x-y'; 0, \sigma_t^2 \mathrm{I}_D)\tag{20}$$
$$= \sup_{y' \in \mathcal{M}} \frac{1}{\sigma_t^D (2\pi)^{D/2}} \exp\left(-\frac{\lVert x - y' \rVert_2^2}{2\sigma_t^2}\right) = \frac{1}{\sigma_t^D (2\pi)^{D/2}} \cdot \exp\left(-\frac{\inf_{y' \in \mathcal{M}} \lVert x - y' \rVert_2^2}{2\sigma_t^2}\right) \xrightarrow{t \to \infty} 0,\tag{21}$$

where convergence to 0 follows from $x \notin \operatorname{cl}(\mathcal{M})$ implying that $\inf_{y' \in \mathcal{M}} \lVert x - y' \rVert_2^2 > 0$.
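To build intuition for properties 2a and 2b, the following toy sketch (purely illustrative and not part of the proof) Monte Carlo estimates the density from (9) when $\mathbb{P}^\dagger$ is the uniform distribution on the unit circle in $\mathbb{R}^2$: the estimate grows without bound at a point on the manifold and vanishes away from it as $\sigma_t$ shrinks.

```python
import numpy as np

def p_t(x, sigma, n_mc=200_000, seed=0):
    """Monte Carlo estimate of (9): p_t(x) = E_{Y ~ P_dagger}[N(x - Y; 0, sigma^2 I_2)],
    with P_dagger the uniform distribution on the unit circle in R^2."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_mc)
    y = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # samples from P_dagger
    sq_dist = np.sum((x - y) ** 2, axis=1)
    return np.mean(np.exp(-0.5 * sq_dist / sigma ** 2) / (2.0 * np.pi * sigma ** 2))

on_manifold, off_manifold = np.array([1.0, 0.0]), np.array([1.5, 0.0])
for sigma in [0.5, 0.1, 0.02]:
    # The density blows up on the circle (2a) and decays to 0 off its closure (2b).
    print(sigma, p_t(on_manifold, sigma), p_t(off_manifold, sigma))
```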

## B.3 Two-Step Correctness Theorem

We restate the two-step correctness theorem below for convenience:
Theorem 2 (Two-Step Correctness): Let $\mathcal{M} \subseteq \mathbb{R}^D$ be a $C^1$ $d$-dimensional embedded submanifold of $\mathbb{R}^D$, and let $\mathbb{P}^*$ be a distribution on $\mathcal{M}$. Assume there exist measurable functions $G: \mathbb{R}^d \to \mathbb{R}^D$ and $g: \mathbb{R}^D \to \mathbb{R}^d$ such that $G(g(x)) = x$, $\mathbb{P}^*$-almost surely. Then:

1. $G_\#(g_\# \mathbb{P}^*) = \mathbb{P}^*$, where $h_\# \mathbb{P}$ denotes the pushforward of measure $\mathbb{P}$ through the function $h$.

2. Moreover, if $\mathbb{P}^*$ is smooth, and $G$ and $g$ are $C^1$, then:

   (a) $g_\# \mathbb{P}^* \ll \mu_d$.

   (b) $G(g(x)) = x$ for every $x \in \mathcal{M}$, and the functions $\tilde{g}: \mathcal{M} \to g(\mathcal{M})$ and $\tilde{G}: g(\mathcal{M}) \to \mathcal{M}$ given by $\tilde{g}(x) = g(x)$ and $\tilde{G}(z) = G(z)$ are diffeomorphisms and inverses of each other.

Similarly to the manifold overfitting theorem, we think of $\mathbb{P}^*$ as a distribution on $\mathbb{R}^D$, assigning to any Borel set $A \subseteq \mathbb{R}^D$ the probability $\mathbb{P}^*(A \cap \mathcal{M})$, which once again is well-defined since $\mathcal{M}$ is an embedded submanifold of $\mathbb{R}^D$.

Proof: We start with part 1. Let $A = \{x \in \mathbb{R}^D : G(g(x)) \neq x\}$, which is a null set under $\mathbb{P}^*$ by assumption.

By applying the definition of pushforward measure twice, for any measurable set B ⊆ M:

$$G_{\#}(g_{\#}\mathbb{P}^{*})(B)=g_{\#}\mathbb{P}^{*}(G^{-1}(B))=\mathbb{P}^{*}(g^{-1}(G^{-1}(B)))=\mathbb{P}^{*}\left(g^{-1}\left(G^{-1}\left(\left(B\setminus A\right)\cup\left(A\cap B\right)\right)\right)\right)$$ $$=\mathbb{P}^{*}\left(g^{-1}\left(G^{-1}\left(B\setminus A\right)\right)\cup g^{-1}\left(G^{-1}\left(A\cap B\right)\right)\right)=\mathbb{P}^{*}(g^{-1}\left(G^{-1}\left(B\setminus A\right)\right))$$ $$=\mathbb{P}^{*}(B\setminus A)=\mathbb{P}^{*}(B),$$
$$\begin{array}{c}{{(22)}}\\ {{(23)}}\\ {{(24)}}\end{array}$$

where we used that $g^{-1}(G^{-1}(A \cap B)) \subseteq A$, and thus $G_{\#}(g_{\#}\mathbb{P}^*) = \mathbb{P}^*$. Note that this derivation requires thinking of $\mathbb{P}^*$ as a measure on $\mathbb{R}^D$ to ensure that $A$ and $g^{-1}(G^{-1}(A \cap B))$ can be assigned 0 probability.

We now prove 2b. We begin by showing that $G(g(x)) = x$ for all $x \in \mathcal{M}$. Consider $\mathbb{R}^D \times \mathcal{M}$ endowed with the product topology. Clearly $\mathbb{R}^D \times \mathcal{M}$ is Hausdorff since both $\mathbb{R}^D$ and $\mathcal{M}$ are Hausdorff ($\mathcal{M}$ is Hausdorff by the definition of a manifold). Let $E = \{(x, x) \in \mathbb{R}^D \times \mathcal{M} : x \in \mathcal{M}\}$, which is then closed in $\mathbb{R}^D \times \mathcal{M}$ (since diagonals of Hausdorff spaces are closed). Consider the function $H: \mathcal{M} \to \mathbb{R}^D \times \mathcal{M}$ given by $H(x) = (G(g(x)), x)$, which is clearly continuous. It follows that $H^{-1}(E) = \{x \in \mathcal{M} : G(g(x)) = x\}$ is closed in $\mathcal{M}$, and thus $\mathcal{M} \setminus H^{-1}(E) = \{x \in \mathcal{M} : G(g(x)) \neq x\}$ is open in $\mathcal{M}$, and by assumption $\mathbb{P}^*(\mathcal{M} \setminus H^{-1}(E)) = 0$. It follows by Lemma 1 in App. B.1 that $\mathcal{M} \setminus H^{-1}(E) = \emptyset$, and thus $G(g(x)) = x$ for all $x \in \mathcal{M}$.

We now prove that $\tilde{g}$ is a diffeomorphism. Clearly $\tilde{g}$ is surjective, and since it admits a left inverse (namely $G$), it is also injective. Then $\tilde{g}$ is bijective, and since it is clearly $C^1$ due to $g$ being $C^1$ and $\mathcal{M}$ being an embedded submanifold of $\mathbb{R}^D$, it only remains to show that its inverse is also $C^1$. Since $G(g(x)) = x$ for every $x \in \mathcal{M}$, it follows that $G(g(\mathcal{M})) = \mathcal{M}$, and thus $\tilde{G}$ is well-defined (i.e. the image of its domain is indeed contained in its codomain). Clearly $\tilde{G}$ is a left inverse to $\tilde{g}$, and by bijectivity of $\tilde{g}$, it follows that $\tilde{G}$ is its inverse. Finally, $\tilde{G}$ is also $C^1$ since $G$ is $C^1$, so that $\tilde{g}$ is indeed a diffeomorphism.

Now, we prove 2a. Let $K \subset \mathbb{R}^d$ be such that $\mu_d(K) = 0$. We need to show that $g_{\#}\mathbb{P}^*(K) = 0$ in order to complete the proof. We have that:

$$g_{\#}\mathbb{P}^{*}(K)=\mathbb{P}^{*}\left(g^{-1}(K)\right)=\mathbb{P}^{*}\left(g^{-1}(K)\cap\mathcal{M}\right).\tag{25}$$
Let $\mathfrak{g}$ be a Riemannian metric on $\mathcal{M}$. Since $\mathbb{P}^* \ll \mu_{\mathcal{M}}^{(\mathfrak{g})}$ by assumption, it is enough to show that $\mu_{\mathcal{M}}^{(\mathfrak{g})}(g^{-1}(K) \cap \mathcal{M}) = 0$. Let $\{U_\alpha\}_\alpha$ be an open (in $\mathcal{M}$) cover of $g^{-1}(K) \cap \mathcal{M}$. Since $\mathcal{M}$ is second countable by definition, by Lindelöf's lemma there exists a countable subcover $\{V_\beta\}_{\beta \in \mathbb{N}}$. Since $g|_{\mathcal{M}}$ is a diffeomorphism onto its image, $(V_\beta, g|_{V_\beta})$ is a chart for every $\beta \in \mathbb{N}$. We have:

$$\mu_{\mathcal{M}}^{(\mathfrak{g})}\left(g^{-1}(K)\cap\mathcal{M}\right)=\mu_{\mathcal{M}}^{(\mathfrak{g})}\left(g^{-1}(K)\cap\mathcal{M}\cap\bigcup_{\beta\in\mathbb{N}}V_{\beta}\right)=\mu_{\mathcal{M}}^{(\mathfrak{g})}\left(\bigcup_{\beta\in\mathbb{N}}g^{-1}(K)\cap\mathcal{M}\cap V_{\beta}\right)\tag{26}$$

$$\leq\sum_{\beta\in\mathbb{N}}\mu_{\mathcal{M}}^{(\mathfrak{g})}\left(g^{-1}(K)\cap\mathcal{M}\cap V_{\beta}\right)\tag{27}$$

$$=\sum_{\beta\in\mathbb{N}}\int_{g|_{V_{\beta}}\left(g^{-1}(K)\cap\mathcal{M}\cap V_{\beta}\right)}\sqrt{\det\mathfrak{g}\left(\frac{\partial}{\partial g|_{V_{\beta}}^{i}},\frac{\partial}{\partial g|_{V_{\beta}}^{j}}\right)}\,\mathrm{d}\mu_{d}=0,\tag{28}$$

where the final equality follows from $g|_{V_{\beta}}(g^{-1}(K) \cap \mathcal{M} \cap V_{\beta}) \subseteq K$ for every $\beta \in \mathbb{N}$ and $\mu_d(K) = 0$.

## C Experimental Details

## C.1 Model Losses

Throughout this section we use L to denote the loss of different models. We use notation that assumes all of these are first-step models, i.e. datapoints are denoted as xn, but we highlight that when trained as second-step models, the datapoints actually correspond to zn. Similarly, whenever a loss includes D, this should be understood as d for second-step models. The description of these losses here is meant only for reference, and we recommend that any reader unfamiliar with these see the relevant citations in the main manuscript. Unlike our main manuscript, measure-theoretic notation is not needed to describe these models, and we thus drop it.

AEs As mentioned in the main manuscript, we train autoencoders with a squared reconstruction error:

$${\cal L}(g,G)=\frac{1}{N}\sum_{n=1}^{N}\|G(g(x_{n}))-x_{n}\|_{2}^{2}.\tag{29}$$
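For concreteness, a minimal PyTorch sketch of this objective is shown below; the MLP architecture, dimensions, and names are placeholders rather than the exact ones used in our experiments.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Minimal AE: g maps R^D to R^d, G maps R^d back to R^D."""
    def __init__(self, D=784, d=20, hidden=256):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(D, hidden), nn.ReLU(), nn.Linear(hidden, d))
        self.G = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, D))

def ae_loss(model, x):
    # Squared reconstruction error of Eq. (29), averaged over the batch.
    return ((model.G(model.g(x)) - x) ** 2).sum(dim=1).mean()
```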

ARMs The loss of autoregressive models is given by the negative log-likelihood:

$${\cal L}(p)=-\frac{1}{N}\sum_{n=1}^{N}\left(\log p(x_{n,1})+\sum_{m=2}^{D}\log p(x_{n,m}|x_{n,1},\ldots,x_{n,m-1})\right),\tag{30}$$

where $x_{n,m}$ denotes the $m$th coordinate of $x_n$.
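Below is a minimal sketch of how such a loss can be computed with an LSTM that reads coordinates sequentially and outputs Gaussian mixture parameters for the next coordinate. Unlike the models described in App. C.4.1, it factorizes channels into independent 1-dimensional mixtures, and all sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.distributions as dist

class LSTMARM(nn.Module):
    """Continuous ARM: an LSTM reads x_{n,1}, ..., x_{n,m-1} and parameterizes a
    Gaussian mixture over x_{n,m}, giving the log-likelihood of Eq. (30)."""
    def __init__(self, channels=1, hidden=256, components=10):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, num_layers=2, batch_first=True)
        self.to_params = nn.Linear(hidden, 3 * components * channels)
        self.channels, self.components = channels, components

    def loss(self, x):
        # x: (batch, D, channels). Shift right so coordinate m only sees 1..m-1.
        inp = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
        h, _ = self.lstm(inp)
        logits, mean, log_scale = self.to_params(h).chunk(3, dim=-1)
        shape = (*x.shape[:2], self.channels, self.components)
        mix = dist.MixtureSameFamily(
            dist.Categorical(logits=logits.view(shape)),
            dist.Normal(mean.view(shape), log_scale.view(shape).exp()))
        return -mix.log_prob(x).sum(dim=(1, 2)).mean()  # negative log-likelihood
```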

AVB Adversarial variational Bayes is highly related to VAEs (see description below), except the approximate posterior is defined implicitly, so that a sample $U$ from $q(\cdot|x)$ can be obtained as $U = \tilde{g}(x, \epsilon)$, where $\epsilon \sim p_\epsilon(\cdot)$, which is often taken as a standard Gaussian of dimension $\tilde{d}$, and $\tilde{g}: \mathbb{R}^D \times \mathbb{R}^{\tilde{d}} \to \mathbb{R}^d$. Since $q(\cdot|x)$ cannot be evaluated, the ELBO used to train VAEs becomes intractable, and thus a discriminator $T: \mathbb{R}^D \times \mathbb{R}^d \to \mathbb{R}$ is introduced, and for fixed $q(\cdot|x)$, trained to minimize:

$$\mathcal{L}(T)=-\sum_{n=1}^{N}\left(\mathbb{E}_{U\sim q(\cdot|x_{n})}[\log s(T(x_{n},U))]+\mathbb{E}_{U\sim p_{U}(\cdot)}[\log(1-s(T(x_{n},U)))]\right),\tag{31}$$

where $s(\cdot)$ denotes the sigmoid function. Denoting the optimal $T$ as $T^*$, the rest of the model components are trained through:

$${\cal L}(G,\sigma_{X})=\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}_{U\sim q(\cdot|x_{n})}[T^{*}(x_{n},U)-\log p(x_{n}|U)],\tag{32}$$

where $p(x_n|U)$ depends on $G$ and $\sigma_X$ in the same way as in VAEs (see below). Analogously to VAEs, this training procedure maximizes a lower bound on the log-likelihood, which is tight when the approximate posterior matches the true one. Finally, $z_n$ can either be taken as:

$$z_{n}=\mathbb{E}_{U\sim q(\cdot|x_{n})}[U]=\mathbb{E}_{\epsilon\sim p_{\epsilon}(\cdot)}[\tilde{g}(x_{n},\epsilon)],\qquad\text{or}\qquad z_{n}=\tilde{g}(x_{n},0).\tag{33}$$

We use the former, and approximate the expectation through a Monte Carlo average. Note that both options define g through g˜ in such a way that zn = g(xn). Finally, in line with Goodfellow et al. (2014), we found that using the "log trick" to avoid saturation in the adversarial loss further improved performance.
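To make this concrete, a small sketch of the discriminator loss in Eq. (31) (batch-averaged rather than summed) is shown below; g_tilde and T are assumed to be callables taking (x, eps) and (x, u) respectively, and the dimensions are illustrative. The model loss in Eq. (32) is then optimized analogously with T held fixed.

```python
import torch
import torch.nn.functional as F

def avb_discriminator_loss(T, g_tilde, x, d=20, d_eps=256):
    """Batch-averaged version of Eq. (31): T separates implicit posterior
    samples (x_n, g~(x_n, eps)) from prior samples (x_n, U)."""
    eps = torch.randn(x.shape[0], d_eps, device=x.device)
    u_post = g_tilde(x, eps)                               # implicit sample from q(.|x_n)
    u_prior = torch.randn(x.shape[0], d, device=x.device)  # U ~ p_U
    return -(F.logsigmoid(T(x, u_post)).mean()             # log s(T(x_n, U)), U ~ q(.|x_n)
             + F.logsigmoid(-T(x, u_prior)).mean())        # log(1 - s(T(x_n, U))), U ~ p_U
```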

BiGAN Bidirectional GANs model the data as $X = G(Z)$, where $Z \sim \tilde{p}_Z$, and $\tilde{p}_Z$ is taken as a $d$-dimensional standard Gaussian. Note that this $\tilde{p}_Z$ is different from $p_Z$ in the main manuscript (which corresponds to the density of the second-step model), hence why we use different notation. BiGANs also aim to recover the $z_n$ corresponding to each $x_n$, and so also use an encoder $g$, in addition to a discriminator $T: \mathbb{R}^D \times \mathbb{R}^d \to \mathbb{R}$. All the components are trained through the following objective:

$${\cal L}(g,G;T)=\mathbb{E}_{Z\sim\tilde{p}_{Z}(\cdot)}[T(G(Z),Z)]-\frac{1}{N}\sum_{n=1}^{N}T(x_{n},g(x_{n})),\tag{34}$$

which is minimized with respect to g and G, but maximized with respect to T. We highlight that this objective is slightly different from the originally proposed BiGAN objective, as we use the Wasserstein loss (Arjovsky et al., 2017) instead of the original Jensen-Shannon. In practice we penalize the gradient of T, as is often done for the Wasserstein objective (Gulrajani et al., 2017). We also found that adding a squared reconstruction error as an additional regularization term helped improve performance.
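A minimal sketch of the Wasserstein version of Eq. (34) is shown below, omitting the gradient penalty and reconstruction regularizers just mentioned; T is assumed to be a callable taking a (data, latent) pair, and the latent dimension d is illustrative.

```python
import torch

def bigan_wgan_losses(g, G, T, x, d=20):
    """Batch estimate of the Wasserstein BiGAN objective of Eq. (34)."""
    z = torch.randn(x.shape[0], d, device=x.device)  # Z ~ d-dimensional standard Gaussian
    obj = T(G(z), z).mean() - T(x, g(x)).mean()
    # (g, G) minimize the objective; T maximizes it, i.e. minimizes its negation.
    return obj, -obj
```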

EBM Energy-based models use an energy function $E: \mathbb{R}^D \to \mathbb{R}$, which implicitly defines a density on $\mathbb{R}^D$ as:

$$p(x)=\frac{e^{-E(x)}}{\int_{\mathbb{R}^{D}}e^{-E(x^{\prime})}\mathrm{d}\mu_{D}(x^{\prime})}.\tag{35}$$

These models attempt to minimize the negative log-likelihood:

$${\cal L}(E)=-\frac{1}{N}\sum_{n=1}^{N}\log p(x_{n}),\tag{36}$$

which is seemingly intractable due to the integral in (35). However, when parameterizing E with θ as a neural network Eθ, gradients of this loss can be obtained thanks to the following identity:

$$-\nabla_{\theta}\log p_{\theta}(x_{n})=\nabla_{\theta}E_{\theta}(x_{n})-\mathbb{E}_{X\sim p_{\theta}}\left[\nabla_{\theta}E_{\theta}(X)\right],\tag{37}$$

where we have also made the dependence of p on θ explicit. While it might seem that the expectation in (37)
is just as intractable as the integral in (35), in practice approximate samples from pθ are obtained through Langevin dynamics and are used to approximate this expectation.
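A minimal sketch of this procedure is given below (not our exact implementation): a short Langevin chain produces approximate model samples, and the gradient of the surrogate loss with respect to the energy network's parameters matches Eq. (37). The step size, noise level, and chain length default to values similar to those reported in App. C.3, but are placeholders.

```python
import torch

def langevin_samples(energy, x_init, n_steps=60, step_size=10.0, noise_std=0.005):
    """Approximate samples from p_theta via truncated Langevin dynamics."""
    x = x_init.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(energy(x).sum(), x)[0]
        x = x - step_size * grad + noise_std * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()

def ebm_surrogate_loss(energy, x_data, x_init):
    # Its parameter gradient is grad E_theta(x_data) - E[grad E_theta(X)], as in Eq. (37).
    x_model = langevin_samples(energy, x_init)
    return energy(x_data).mean() - energy(x_model).mean()
```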

NFs Normalizing flows use a bijective neural network $h: \mathbb{R}^D \to \mathbb{R}^D$, along with a base density $p_U$ on $\mathbb{R}^D$, often taken as a standard Gaussian, and model the data as $X = h(U)$, where $U \sim p_U$. Thanks to the change-of-variable formula, the density of the model can be evaluated:

$$p(x)=p_{U}(h^{-1}(x))|\operatorname*{det}J_{h^{-1}}(x)|,\tag{38}$$
and flows can thus be trained via maximum-likelihood:

$${\mathcal L}(h)=-\frac{1}{N}\sum_{n=1}^{N}\left(\log p_{U}(h^{-1}(x_{n}))+\log|\det J_{h^{-1}}(x_{n})|\right).\tag{39}$$

In practice, $h$ is constructed in a way that not only ensures it is bijective, but also ensures that $\log|\det J_{h^{-1}}(x_n)|$ can be efficiently computed.
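As a minimal illustration of Eq. (39) (and not the spline flows we actually use), the sketch below uses a toy elementwise affine bijection, whose inverse and log-determinant are available in closed form, together with a standard Gaussian base density.

```python
import math
import torch
import torch.nn as nn

class ElementwiseAffineFlow(nn.Module):
    """Toy bijection h(u) = exp(a) * u + b."""
    def __init__(self, D):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(D))
        self.b = nn.Parameter(torch.zeros(D))

    def inverse_and_logdet(self, x):
        u = (x - self.b) * torch.exp(-self.a)        # h^{-1}(x)
        log_det = -self.a.sum().expand(x.shape[0])   # log|det J_{h^{-1}}(x)|, constant in x here
        return u, log_det

def nf_loss(flow, x):
    # Negative log-likelihood of Eq. (39) with a standard Gaussian base density p_U.
    u, log_det = flow.inverse_and_logdet(x)
    log_pu = (-0.5 * u.pow(2) - 0.5 * math.log(2 * math.pi)).sum(dim=1)
    return -(log_pu + log_det).mean()
```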


VAEs Variational autoencoders define the generative process for the data as $U \sim p_U$, $X|U \sim p(\cdot|U)$. Typically, $p_U$ is a standard $d$-dimensional Gaussian (although a learnable prior can also be used), and in our case, $p(\cdot|u)$ is given by a Gaussian:

$$p(x|u)={\mathcal{N}}(x;G(u),\sigma_{X}^{2}(u)I_{D}),\tag{40}$$

where $\sigma_X: \mathbb{R}^d \to \mathbb{R}$ is a neural network. Maximum-likelihood is intractable since the latent variables $u_n$ corresponding to $x_n$ are unobserved, so instead an approximate posterior $q(u|x)$ is introduced. We take $q$ to be Gaussian:

$$q(u|x)={\mathcal{N}}(u;g(x),{\rm diag}(\sigma_{U}^{2}(x))),\tag{41}$$

where $\sigma_U^2: \mathbb{R}^D \to \mathbb{R}^d_{>0}$ is a neural network, and $\mathrm{diag}(\sigma_U^2(x))$ denotes a diagonal matrix whose nonzero elements are given by $\sigma_U^2(x)$. An objective called the negative ELBO is then minimized:

$${\mathcal{L}}(g,G,\sigma_{U},\sigma_{X})={\frac{1}{N}}\sum_{n=1}^{N}\left(\mathbb{KL}(q(\cdot|x_{n})\,\|\,p_{U}(\cdot))-\mathbb{E}_{U\sim q(\cdot|x_{n})}\left[\log p(x_{n}|U)\right]\right).\tag{42}$$
The ELBO can be shown to be a lower bound on the log-likelihood, which becomes tight as the approximate posterior matches the true posterior. Note that $z_n$ corresponds to the mean of the unobserved latent $u_n$:

$$z_{n}=\mathbb{E}_{U\sim q(\cdot|x_{n})}[U]=g(x_{n}).\tag{43}$$
We highlight once again that the notation we use here corresponds to VAEs when used as first-step models.

When used as second-step models, as previously mentioned, the observed datapoint xn becomes zn, but in this case the encoder and decoder functions do not correspond to g and G anymore. Similarly, for second-step models, the unobserved variables un become "irrelevant" in terms of the main contents of our paper, and are not related to zn in the same way as in first-step models. For second-step models, we keep the latent dimension as d still.
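A minimal sketch of the negative ELBO in Eq. (42) with the Gaussian decoder of Eq. (40) and the Gaussian posterior of Eq. (41) is given below. It uses a single reparameterized sample, a closed-form KL term against the standard Gaussian prior, and assumes sigma_U returns per-dimension standard deviations and sigma_X a positive tensor of shape (batch, 1); all networks are placeholders.

```python
import math
import torch

def vae_negative_elbo(g, G, sigma_U, sigma_X, x):
    """Single-sample Monte Carlo estimate of Eq. (42)."""
    mu, std = g(x), sigma_U(x)                 # q(u|x) = N(mu, diag(std^2))
    u = mu + std * torch.randn_like(std)       # reparameterized posterior sample
    # Closed-form KL(q(.|x_n) || p_U) against a standard Gaussian prior.
    kl = 0.5 * (mu.pow(2) + std.pow(2) - 2 * torch.log(std) - 1).sum(dim=1)
    # log p(x_n | u) for the Gaussian decoder of Eq. (40); sigma_X(u): (batch, 1).
    mean_x, sx = G(u), sigma_X(u)
    log_px = (-0.5 * ((x - mean_x) / sx).pow(2)
              - torch.log(sx) - 0.5 * math.log(2 * math.pi)).sum(dim=1)
    return (kl - log_px).mean()
```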

WAEs Wasserstein autoencoders, similarly to BiGANs, model the data as $X = G(Z)$, where $Z \sim \tilde{p}_Z$, which is taken as a $d$-dimensional standard Gaussian, and use a discriminator $T: \mathbb{R}^d \to \mathbb{R}$. The WAE objective is given by:

$$\mathcal{L}(g,G;T)=\frac{1}{N}\sum_{n=1}^{N}\left(\|G(g(x_{n}))-x_{n}\|_{2}^{2}+\lambda\log(1-s(T(g(x_{n}))))\right)+\lambda\mathbb{E}_{Z\sim\tilde{p}_{Z}(\cdot)}[\log s(T(Z))],\tag{44}$$

where s(·) denotes the sigmoid function, λ > 0 is a hyperparameter, and the objective is minimized with respect to g and G, and maximized with respect to T. Just as in AVB, we found that using the "log trick" of Goodfellow et al. (2014) in the adversarial loss further improved performance.
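The sketch below spells out Eq. (44); the same batch estimate is minimized with respect to (g, G) and maximized with respect to T (the "log trick" and gradient details are omitted, and the values of λ and d are illustrative).

```python
import torch
import torch.nn.functional as F

def wae_losses(g, G, T, x, lam=10.0, d=20):
    """Batch estimate of the WAE objective of Eq. (44)."""
    z_enc = g(x)                                           # encoded data
    z_prior = torch.randn(x.shape[0], d, device=x.device)  # Z ~ d-dimensional standard Gaussian
    recon = ((G(z_enc) - x) ** 2).sum(dim=1).mean()
    adv = F.logsigmoid(-T(z_enc)).mean()                   # log(1 - s(T(g(x_n))))
    prior_term = F.logsigmoid(T(z_prior)).mean()           # log s(T(Z))
    obj = recon + lam * adv + lam * prior_term
    return obj, -obj    # (g, G) minimize obj; T maximizes it
```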

## C.2 VAE From Fig. 2

We generated N = 1000 samples from P∗ = 0.3δ−1 + 0.7δ1, resulting in a dataset containing the value 1 a total of 693 times. The Gaussian VAE has d = 1, D = 1, and both the encoder and decoder have a single hidden layer with 25 units and ReLU activations. We use the Adam optimizer (Kingma & Ba, 2015) with learning rate 0.001 and train for 200 epochs. We use gradient norm clipping with a value of 10.

## C.3 Simulated Data

For the ground truth, we use a von Mises distribution with parameter κ = 1, and transform to Cartesian coordinates to obtain a distribution on the unit circle in R
D = R
2. We generate N = 1000 samples from this distribution. For the EBM model, we use an energy function with two hidden layers of 25 units each and Swish activations (Ramachandran et al., 2017). We use the Adam optimizer with learning rate 0.01, and gradient norm clipping with value of 1. We train for 100 epochs. We follow Du & Mordatch (2019) for the training of the EBM, and use 0.1 for the objective regularization value, iterate Langevin dynamics for 60 iterations at every training step, use a step size of 10 within Langevin dynamics, sample new images with probability 0.05 in the buffer, use Gaussian noise with standard deviation 0.005 in Langevin dynamics, and truncate gradients to (−0.03, 0.03) in Langevin dynamics. For the AE+EBM model, we use an AE with d = 1 and two hidden layers of 20 units each with ELU activations (Clevert et al., 2016). We use the Adam optimizer with learning rate 0.001 and train for 200 epochs. We use gradient norm clipping with a value of 10. For the EBM of this model, we use an energy function with two hidden layers of 15 units each, and all the other parameters are identical to the single step EBM. We observed some variability with respect to the seed for both the EBM and the AE+EBM models; the manuscript shows the best performing versions.

For the additional results of Sec. D.1, the ground truth is given by a Gaussian whose first coordinate has mean 0 and variance 2, while the second coordinate has mean 1 and variance 1, and they have a covariance of 0.5.

The VAEs are identical to those from Fig. 2, except their input and output dimensions change accordingly.

## C.4 Comparisons Against Maximum-Likelihood And OOD Detection With Implicit Models

For all experiments, we use the Adam optimizer, typically with learning rate 0.001. For all experiments we also clip gradient entries larger than 10 during optimization. We also set d = 20 in all experiments.

## C.4.1 Single And First-Step Models

For all single and first-step models, unless specified otherwise, we pre-process the data by scaling it, i.e. dividing by the maximum absolute value entry. All convolutions have a kernel size of 3 and stride 1. For all versions with added Gaussian noise, we tried standard deviation values σ ∈ {1, 0.1, 0.01, 0.001, 0.0001} and kept the best performing one (σ = 0.1, as measured by FID) unless otherwise specified.

AEs For MNIST and FMNIST, we use MLPs for the encoder and decoder, with ReLU activations. The encoder and decoder each have a single hidden layer with 256 units. For SVHN and CIFAR-10, we use convolutional networks. The encoder and decoder have 4 convolutional layers with (32, 32, 16, 16) and
(16, 16, 32, 32) channels, respectively, followed by a flattening operation and a fully-connected layer. The convolutional networks also use ReLU activations, and have kernel size 3 and stride 1. We perform early stopping on reconstruction error with a patience of 10 epochs, for a maximum of 100 epochs.

ARMs We use an updated version of RNADE (Uria et al., 2013), where we use an LSTM (Hochreiter & Schmidhuber, 1997) to improve performance. More specifically, every pixel is processed sequentially through the LSTM, and a given pixel is modelled with a mixture of Gaussians whose parameters are given by transforming the hidden state obtained from all the previous pixels through a linear layer. The dimension of a pixel is given by the number of channels, so that MNIST and FMNIST use mixtures of 1-dimensional Gaussians, whereas SVHN and CIFAR-10 use mixtures of 3-dimensional Gaussians. We also tried a continuous version of the PixelCNN model (van den Oord et al., 2016b), where we replaced the discrete distribution over pixels with a mixture of Gaussians, but found this model highly unstable - which is once again consistent with manifold overfitting - and thus opted for the LSTM-based model. We used 10 components for the Gaussian mixtures, and used an LSTM with 2 layers and hidden states of size 256. We train for a maximum of 100 epochs, and use early stopping on log-likelihood with a patience of 10. We also use cosine annealing on the learning rate. For the version with added Gaussian noise, we used σ = 1.0. We observed some instabilities in training these single step models, particularly when not adding noise, where the final model was much worse than average (over 100 difference in FID score). We treated these runs as failed runs and excluded them from the averages and standard errors reported in our paper.

AVB We use the exact same configuration for the encoder and decoder as in AEs, and use an MLP with 2 hidden layers of size 256 each for the discriminator, which also uses ReLU activations. We train the MLPs for a maximum of 50 epochs, and CNNs for 100 epochs, using cosine annealing on the learning rates. For the large version, AVB+, we use two hidden layers of 256 units for the encoder and decoder MLPs, and increase the encoder and decoder number of hidden channels to (64, 64, 32, 32) and (32, 32, 64, 64), respectively, for convolutional networks. In all cases, the encoder takes in 256-dimensional Gaussian noise with covariance 9 · ID. We also tried having the decoder output per-pixel variances, but found this parameterization to be numerically unstable, which is again consistent with manifold overfitting.

BiGAN As mentioned in Sec. C.1, we used a Wasserstein-GAN (W-GAN) objective (Arjovsky et al., 2017)
with gradient penalties (Gulrajani et al., 2017) where both the data and latents are interpolated between the real and generated samples. The gradient penalty weight was 10. The generator-encoder loss includes the W-GAN loss, and the reconstruction loss (joint latent regressor from Donahue et al. (2017)), equally weighted. For both small and large versions, we use the exact same configuration for the encoder, decoder, and discriminator as for AVB. We used learning rates of 0.0001 with cosine annealing over 200 epochs. The discriminator was trained for two steps for every step taken with the encoder/decoder.

EBMs For MNIST and FMNIST, our energy functions use MLPs with two hidden layers with 256 and 128 units, respectively. For SVHN and CIFAR-10, the energy functions have 4 convolutional layers with hidden channels (64, 64, 32, 32). We use the Swish activation function and spectral normalization in all cases. We set the energy function's output regularization coefficient to 1 and the learning rate to 0.0003. Otherwise, we use the same hyperparameters as on the simulated data. At the beginning of training, we scale all the data to between 0 and 1. We train for 100 epochs without early stopping, which tended to halt training too early.

NFs We use a rational quadratic spline flow (Durkan et al., 2019) with 128 hidden units, 4 layers, and 3 blocks per layer. We train using early stopping on validation loss with a patience of 30 epochs, up to a maximum of 100 epochs. We use a learning rate of 0.0005, and use a whitening transform at the start of training to make the data zero-mean and marginally unit-variance, whenever possible (some pixels, particularly in MNIST, were only one value throughout the entire training set); note that this affine transformation does not affect the manifold structure of the data.

VAEs The settings for VAEs were largely identical to those of AVB, except we did not do early stopping and always trained for 100 epochs, in addition to not needing a discriminator. For large models a single hidden layer of 512 units was used for each of the encoder and decoder MLPs. We also tried the same decoder per-pixel variance parameterization that we attempted with AVB and obtained similar numerical instabilities, once again in line with manifold overfitting.

WAEs We use the adversarial variant rather than the maximum mean discrepancy (Gretton et al., 2012) one. We weight the adversarial loss with a coefficient of 10. The settings for WAEs were identical to those of AVB, except (i) we used a patience of 30 epochs, trained for a maximum of 300 epochs, (ii) we used no learning rate scheduling, with a discriminator learning rate of 2.5×10−4 and an encoder-decoder learning rate of 5×10−4, and (iii) we used only convolutional encoders and decoders, with (64, 64, 32, 32) and (32, 32, 64, 64) hidden channels, respectively. For large models the number of hidden channels was increased to (96, 96, 48, 48) and (48, 48, 96, 96) for the encoder and decoder, respectively.

## C.4.2 Second-Step Models

All second-step models, unless otherwise specified, pre-process the encoded data by standardizing it (i.e. subtracting the mean and dividing by the standard deviation).

ARMs We used the same configuration for second-step ARMs as for the first-step version, except the LSTM has a single hidden layer with hidden states of size 128.

AVB We used the same configuration for second-step AVB as we did for the first-step MLP version of AVB, except that we do not do early stopping and train for 100 epochs. The latent dimension is set to d (i.e. 20).

EBMs We used the same configuration that we used for single-step EBMs, except we use a learning rate of 0.001, we regularize the energy function's output by 0.1, do not use spectral normalization, take the energy function to have two hidden layers with (64, 32) units, and scale the data between −1 and 1.

NFs We used the same settings for second-step NFs as we did for first-step NFs, except (i) we use 64 hidden units, (ii) we do not do early stopping, training for a maximum of 100 epochs, and (iii) we use a learning rate of 0.001.

VAEs We used the same settings for second-step VAEs as we did for first-step VAEs. The latent dimension is also set to d (i.e. 20).

## C.4.3 Parameter Counts

Table 3 includes parameter counts for all the models we consider in Table 1. Two-step models have either fewer parameters than the large one-step model versions, or a roughly comparable amount, with some exceptions which we now discuss. First, when using normalizing flows as second-step models, we used significantly more complex models than with other two-step models. We did this for added variability in the number of parameters, not because using fewer parameters makes two-step models not outperform their single-step counterparts. Two-step models with an NF as the second-step model outperform other two-step models (see Table 1), but there is a much more drastic improvement from single to two-step models. This difference in improvements further highlights that the main cause for empirical gains is the two-step nature of our models, rather than an increased number of parameters. Second, the AE+EBM models use more parameters than their single-step baselines. This was by design, as the architecture of the energy functions mimics that of the encoders of other larger models, except it outputs scalars and thus has fewer parameters, and hence we believe this remains a fair comparison. We also note that AE+EBM models have most of their parameters assigned to the AE, and the second-step EBM contributes only 4k additional parameters. AE+EBM models also train and sample much faster than their single-step EBM+ counterparts. Finally, we note that measuring capacity is difficult, and parameter counts simply provide a proxy.

Table 3: Approximate parameter counts in thousands.

| MODEL | MNIST/FMNIST | SVHN/CIFAR-10 |
|---------------------------------------------------|--------------|---------------|
| AVB                                               | 750          | 980           |
| AVB+ / AVB+ σ                                     | 882          | 1725          |
| AVB+ARM                                           | 1021         | 1251          |
| AVB+AVB                                           | 913          | 1143          |
| AVB+EBM                                           | 754          | 984           |
| AVB+NF                                            | 5756         | 5986          |
| AVB+VAE                                           | 771          | 1001          |
| VAE                                               | 412          | 703           |
| VAE+ / VAE+ σ                                     | 824          | 1448          |
| VAE+ARM                                           | 683          | 974           |
| VAE+AVB                                           | 575          | 866           |
| VAE+EBM                                           | 416          | 707           |
| VAE+NF                                            | 5418         | 5709          |
| ARM+ / ARM+ σ                                     | 797          | 799           |
| AE+ARM                                            | 683          | 974           |
| EBM+ / EBM+ σ                                     | 236          | 99            |
| AE+EBM                                            | 416          | 707           |

## D Additional Experimental Results

## D.1 Simulated Data

As mentioned in the main manuscript, we carry out additional experiments where we have access to the ground truth P∗ in order to further verify that our improvements from two-step models indeed come from mismatched dimensions. Fig. 6 shows the results of running VAE and VAE+VAE models when trying to approximate a nonstandard 2-dimensional Gaussian distribution. First, we can see that when setting the intrinsic dimension of the models to d = 2, the VAE and VAE+VAE models have very similar performance, with the VAE being slightly better. Indeed, there is no reason to suppose the second-step VAE will have an easier time learning encoded data than the first-step VAE learning the actual data. This result visually confirms that two-step models do not outperform single-step models trained with maximum likelihood when the dimension of maximum-likelihood is correctly specified. Second, we can see that both the VAE and the VAE+VAE models with intrinsic dimension d = 1 underperform their counterparts with d = 2. However, while the VAE model still manages to approximate its target distribution, the VAE+VAE completely fails.

This result visually confirms that two-step models significantly underperform single-step models trained with maximum-likelihood if the data has no low-dimensional structure and the two-step model tries to enforce such structure anyway. Together, these results highlight that the reason two-step models outperform maximum-likelihood so strongly in the main manuscript is indeed the dimensionality mismatch caused by not heeding the manifold hypothesis.

![34_image_0.png](34_image_0.png)

Figure 6: Results on simulated data: Gaussian ground truth **(top left)**, VAE with d = 1 **(top middle)**, VAE with d = 2 **(top right)**, VAE+VAE with d = 1 **(bottom left)**, and VAE+VAE with d = 2 **(bottom right)**.

## D.2 Samples

We show samples obtained by the VAE, VAE+, VAE+σ, and VAE+ARM models in Fig. 7. In addition to the FID improvements shown in the main manuscript, we can see a very noticeable qualitative improvement obtained by the two-step models. Note that the VAE in the VAE+ARM model is the same as the single-step VAE model. Similarly, we show samples from AVB+σ, AVB+NF, AVB+EBM, and AVB+VAE in Fig. 8, where two-step models greatly improve visual quality. We also show samples from the ARM+, ARM+σ, and AE+ARM models from the main manuscript in Fig. 9; and for the EBM+, EBM+σ, and AE+EBM models in Fig. 10.

We can see that FID score is indeed not always indicative of image quality, and that our AE+ARM and AE+EBM models significantly outperform their single-step counterparts (except AE+EBM on MNIST).

Finally, the BiGAN and WAE samples shown in Fig. 11 and Fig. 12, respectively, are not consistently better for two-step models, but neither BiGANs nor WAEs are trained via maximum likelihood, so manifold overfitting is not necessarily implied by Theorem 1. Other two-step combinations not shown gave similar results.

![35_image_0.png](35_image_0.png)

Figure 7: Uncurated samples from models trained on MNIST **(first row)**, FMNIST **(second row)**, SVHN **(third row)**, and CIFAR-10 **(fourth row)**. Models are VAE **(first column)**, VAE+ **(second column)**, VAE+σ **(third column)**, and VAE+ARM **(fourth column)**.

![36_image_0.png](36_image_0.png)

Figure 8: Uncurated samples from models trained on MNIST **(first row)**, FMNIST **(second row)**, SVHN **(third row)**, and CIFAR-10 **(fourth row)**. Models are AVB+σ **(first column)**, AVB+EBM **(second column)**, AVB+NF **(third column)**, and AVB+VAE **(fourth column)**.

![37_image_0.png](37_image_0.png)

Figure 9: Uncurated samples from models trained on MNIST **(first row)**, FMNIST **(second row)**, SVHN **(third row)**, and CIFAR-10 **(fourth row)**. Models are ARM+ **(first column)**, ARM+σ **(second column)**, and AE+ARM **(third column)**.

![38_image_0.png](38_image_0.png)

Figure 10: Uncurated samples with Langevin dynamics run for 60 steps initialized from training buffer on MNIST **(first row)**, FMNIST **(second row)**, SVHN **(third row)**, and CIFAR-10 **(fourth row)**. Models are EBM+ **(first column)**, EBM+σ **(second column)**, and AE+EBM **(third column)**.

![39_image_0.png](39_image_0.png)

Figure 11: Uncurated samples from models trained on MNIST **(first row)**, FMNIST **(second row)**, SVHN **(third row)**, and CIFAR-10 **(fourth row)**. Models are BiGAN **(first column)**, BiGAN+ **(second column)**, BiGAN+AVB **(third column)**, and BiGAN+NF **(fourth column)**. BiGANs are not trained via maximum-likelihood, so Theorem 1 does not imply that manifold overfitting should occur.

![40_image_0.png](40_image_0.png)

Figure 12: Uncurated samples from models trained on MNIST **(first row)**, FMNIST **(second row)**, SVHN **(third row)**, and CIFAR-10 **(fourth row)**. Models are WAE+ **(first column)**, WAE+ARM **(second column)**, WAE+NF **(third column)**, and WAE+VAE **(fourth column)**. WAEs are not trained via maximum-likelihood, so Theorem 1 does not imply that manifold overfitting should occur.

## D.3 EBM Improvements

Following Du & Mordatch (2019), we evaluated the single-step EBM's sample quality on the basis of samples initialized from the training buffer. However, when MCMC samples were initialized from uniform noise, we observed that all samples would converge to a small collection of low-quality modes (see Fig. 13). Moreover, at each training epoch, these modes would change, even as the loss value decreased.

The described non-convergence in the EBM's model distribution is consistent with Corollary 1. On the other hand, when used as a low-dimensional density estimator in the two-step procedure, this problem vanished: MCMC samples initialized from random noise yielded diverse images. See Fig. 13 for a comparison.

![41_image_0.png](41_image_0.png)

Figure 13: Uncurated samples with Langevin dynamics initialized from random noise (with no buffer) trained on MNIST **(first row)**, FMNIST **(second row)**, SVHN **(third row)**, and CIFAR-10 **(fourth row)**. Models are EBM+ with 60 steps **(first column)**, EBM+ with 200 steps **(second column)**, EBM+ with 500 steps **(third column)**, and AE+EBM with 60 steps **(fourth column)**.

## D.4 FID, Precision, And Recall Scores

We show in Tables 4 and 5 precision and recall (along with FID) of all the models used in Sec. 6.2. We opt for the precision and recall scores of Kynkäänniemi et al. (2019) rather than those of Sajjadi et al. (2018) as the former aim to improve on the latter. We also tried the density and coverage metrics proposed by Naeem et al. (2020), but found these metrics to correlate with visual quality less than FID. Similarly, we also considered using the inception score (Salimans et al., 2016), but this metric is known to have issues (Barratt & Sharma, 2018), and the FID is widely preferred over it. We can see in Tables 4 and 5 that two-step models consistently outperform single-step models in recall, while either also outperforming or not underperforming in precision. Much like with FID score, some instances of AE+ARM have worse scores on both precision and recall than their corresponding single-step model. Given the superior visual quality of those two-step models, we also consider these as failure cases of the evaluation metrics themselves, which we highlight in red in Tables 4 and 5. We believe that some non-highlighted results do not properly reflect the magnitude by which the two-step models outperformed single-step models, and encourage the reader to see the corresponding samples.

We show in Table 6 the FID scores of models involving BiGANs and WAEs. These methods are not trained via maximum likelihood, so Theorem 1 does not apply. In contrast to the likelihood-based models from Table 1, there is no significant improvement in FID for BiGANs and WAEs from using a two-step approach, and sometimes two-step models perform worse. However, for BiGANs we observe similar visual quality in samples
(see Fig. 11), once again highlighting a failure of the FID score as a metric. We show these failures with red in Table 6.

| MODEL | MNIST FID | MNIST Precision | MNIST Recall | FMNIST FID | FMNIST Precision | FMNIST Recall |
|---------|-------------|-----------------|-----------------|-------------|-----------------|-----------------|
| AVB | 219.0 ± 4.2 | 0.0000 ± 0.0000 | 0.0008 ± 0.0007 | 235.9 ± 4.5 | 0.0006 ± 0.0000 | 0.0086 ± 0.0037 |
| AVB+ | 205.0 ± 3.9 | 0.0000 ± 0.0000 | 0.0106 ± 0.0089 | 216.2 ± 3.9 | 0.0008 ± 0.0002 | 0.0075 ± 0.0052 |
| AVB+σ | 205.2 ± 1.0 | 0.0000 ± 0.0000 | 0.0065 ± 0.0032 | 223.8 ± 5.4 | 0.0007 ± 0.0002 | 0.0034 ± 0.0009 |
| AVB+ARM | 86.4 ± 0.9 | 0.0012 ± 0.0003 | 0.0051 ± 0.0011 | 78.0 ± 0.9 | 0.1069 ± 0.0055 | 0.0106 ± 0.0011 |
| AVB+AVB | 133.3 ± 0.9 | 0.0001 ± 0.0000 | 0.0093 ± 0.0027 | 143.9 ± 2.5 | 0.0151 ± 0.0015 | 0.0093 ± 0.0019 |
| AVB+EBM | 96.6 ± 3.0 | 0.0006 ± 0.0000 | 0.0021 ± 0.0007 | 103.3 ± 1.4 | 0.0386 ± 0.0016 | 0.0110 ± 0.0013 |
| AVB+NF | 83.5 ± 2.0 | 0.0009 ± 0.0001 | 0.0059 ± 0.0015 | 77.3 ± 1.1 | 0.1153 ± 0.0031 | 0.0092 ± 0.0004 |
| AVB+VAE | 106.2 ± 2.5 | 0.0005 ± 0.0000 | 0.0088 ± 0.0005 | 105.7 ± 0.6 | 0.0521 ± 0.0035 | 0.0166 ± 0.0007 |
| VAE | 197.4 ± 1.5 | 0.0000 ± 0.0000 | 0.0035 ± 0.0004 | 188.9 ± 1.8 | 0.0030 ± 0.0006 | 0.0270 ± 0.0048 |
| VAE+ | 184.0 ± 0.7 | 0.0000 ± 0.0000 | 0.0036 ± 0.0006 | 179.1 ± 0.2 | 0.0025 ± 0.0003 | 0.0069 ± 0.0012 |
| VAE+σ | 185.9 ± 1.8 | 0.0000 ± 0.0000 | 0.0070 ± 0.0012 | 183.4 ± 0.7 | 0.0027 ± 0.0002 | 0.0095 ± 0.0036 |
| VAE+ARM | 69.7 ± 0.8 | 0.0008 ± 0.0000 | 0.0041 ± 0.0001 | 70.9 ± 1.0 | 0.1485 ± 0.0037 | 0.0129 ± 0.0011 |
| VAE+AVB | 117.1 ± 0.8 | 0.0002 ± 0.0000 | 0.0123 ± 0.0002 | 129.6 ± 3.1 | 0.0291 ± 0.0040 | 0.0454 ± 0.0046 |
| VAE+EBM | 74.1 ± 1.0 | 0.0007 ± 0.0001 | 0.0015 ± 0.0006 | 78.7 ± 2.2 | 0.1275 ± 0.0052 | 0.0030 ± 0.0002 |
| VAE+NF | 70.3 ± 0.7 | 0.0009 ± 0.0000 | 0.0067 ± 0.0011 | 73.0 ± 0.3 | 0.1403 ± 0.0022 | 0.0116 ± 0.0016 |
| ARM+ | 98.7 ± 10.6 | 0.0471 ± 0.0098 | 0.3795 ± 0.0710 | 72.7 ± 2.1 | 0.2005 ± 0.0059 | 0.4349 ± 0.0143 |
| ARM+σ | 34.7 ± 3.1 | 0.0849 ± 0.0112 | 0.3349 ± 0.0063 | 23.1 ± 0.9 | 0.3508 ± 0.0099 | 0.5653 ± 0.0092 |
| AE+ARM | 72.0 ± 1.3 | 0.0006 ± 0.0001 | 0.0038 ± 0.0003 | 76.0 ± 0.3 | 0.0986 ± 0.0038 | 0.0069 ± 0.0005 |
| EBM+ | 84.2 ± 4.3 | 0.4056 ± 0.0145 | 0.0008 ± 0.0006 | 135.6 ± 1.6 | 0.6550 ± 0.0054 | 0.0000 ± 0.0000 |
| EBM+σ | 101.0 ± 12.3 | 0.3748 ± 0.0496 | 0.0013 ± 0.0008 | 135.3 ± 0.9 | 0.6384 ± 0.0027 | 0.0000 ± 0.0000 |
| AE+EBM | 75.4 ± 2.3 | 0.0007 ± 0.0001 | 0.0008 ± 0.0002 | 83.1 ± 1.9 | 0.0891 ± 0.0046 | 0.0037 ± 0.0009 |

Table 4: FID (lower is better), and Precision and Recall scores (higher is better) on MNIST and FMNIST. Means ± standard errors across 3 runs are shown. Unreliable scores are highlighted in red.

| MODEL | SVHN FID | SVHN Precision | SVHN Recall | CIFAR-10 FID | CIFAR-10 Precision | CIFAR-10 Recall |
|---------|--------------|-----------------|-----------------|--------------|-----------------|-----------------|
| AVB | 356.3 ± 10.2 | 0.0148 ± 0.0035 | 0.0000 ± 0.0000 | 289.0 ± 3.0 | 0.0602 ± 0.0111 | 0.0000 ± 0.0000 |
| AVB+ | 352.6 ± 7.6 | 0.0088 ± 0.0018 | 0.0000 ± 0.0000 | 297.1 ± 1.1 | 0.0902 ± 0.0192 | 0.0000 ± 0.0000 |
| AVB+σ | 353.0 ± 7.2 | 0.0425 ± 0.0293 | 0.0000 ± 0.0000 | 305.8 ± 8.7 | 0.1304 ± 0.0460 | 0.0000 ± 0.0000 |
| AVB+ARM | 56.6 ± 0.6 | 0.6741 ± 0.0090 | 0.0206 ± 0.0011 | 182.5 ± 1.0 | 0.4670 ± 0.0037 | 0.0003 ± 0.0001 |
| AVB+AVB | 74.5 ± 2.5 | 0.5765 ± 0.0157 | 0.0224 ± 0.0008 | 183.9 ± 1.7 | 0.4617 ± 0.0078 | 0.0006 ± 0.0003 |
| AVB+EBM | 61.5 ± 0.8 | 0.6809 ± 0.0092 | 0.0162 ± 0.0020 | 189.7 ± 1.8 | 0.4543 ± 0.0094 | 0.0006 ± 0.0002 |
| AVB+NF | 55.4 ± 0.8 | 0.6724 ± 0.0078 | 0.0217 ± 0.0007 | 181.7 ± 0.8 | 0.4632 ± 0.0024 | 0.0009 ± 0.0001 |
| AVB+VAE | 59.9 ± 1.3 | 0.6698 ± 0.0105 | 0.0214 ± 0.0010 | 186.7 ± 0.9 | 0.4517 ± 0.0046 | 0.0006 ± 0.0001 |
| VAE | 311.5 ± 6.9 | 0.0098 ± 0.0030 | 0.0018 ± 0.0012 | 270.3 ± 3.2 | 0.0805 ± 0.0016 | 0.0000 ± 0.0000 |
| VAE+ | 300.1 ± 2.1 | 0.0133 ± 0.0014 | 0.0000 ± 0.0000 | 257.8 ± 0.6 | 0.1287 ± 0.0183 | 0.0001 ± 0.0000 |
| VAE+σ | 302.2 ± 2.0 | 0.0086 ± 0.0018 | 0.0004 ± 0.0003 | 257.8 ± 1.7 | 0.1328 ± 0.0152 | 0.0000 ± 0.0000 |
| VAE+ARM | 52.9 ± 0.3 | 0.7004 ± 0.0016 | 0.0234 ± 0.0005 | 175.2 ± 1.3 | 0.4865 ± 0.0055 | 0.0004 ± 0.0001 |
| VAE+AVB | 64.0 ± 1.3 | 0.6234 ± 0.0110 | 0.0273 ± 0.0006 | 176.7 ± 2.0 | 0.5140 ± 0.0123 | 0.0007 ± 0.0002 |
| VAE+EBM | 63.7 ± 3.3 | 0.6983 ± 0.0071 | 0.0163 ± 0.0008 | 181.7 ± 2.8 | 0.4849 ± 0.0098 | 0.0002 ± 0.0001 |
| VAE+NF | 52.9 ± 0.3 | 0.6902 ± 0.0059 | 0.0243 ± 0.0011 | 175.1 ± 0.9 | 0.4755 ± 0.0095 | 0.0007 ± 0.0002 |
| ARM+ | 168.3 ± 4.1 | 0.1425 ± 0.0086 | 0.0759 ± 0.0031 | 162.6 ± 2.2 | 0.6093 ± 0.0066 | 0.0313 ± 0.0061 |
| ARM+σ | 149.2 ± 10.7 | 0.1622 ± 0.0210 | 0.0961 ± 0.0069 | 136.1 ± 4.2 | 0.6585 ± 0.0116 | 0.0993 ± 0.0106 |
| AE+ARM | 60.1 ± 3.0 | 0.5790 ± 0.0275 | 0.0192 ± 0.0014 | 186.9 ± 1.0 | 0.4544 ± 0.0073 | 0.0008 ± 0.0002 |
| EBM+ | 228.4 ± 5.0 | 0.0955 ± 0.0367 | 0.0000 ± 0.0000 | 201.4 ± 7.9 | 0.6345 ± 0.0310 | 0.0000 ± 0.0000 |
| EBM+σ | 235.0 ± 5.6 | 0.0983 ± 0.0183 | 0.0000 ± 0.0000 | 200.6 ± 4.8 | 0.6380 ± 0.0156 | 0.0000 ± 0.0000 |
| AE+EBM | 75.2 ± 4.1 | 0.5739 ± 0.0299 | 0.0196 ± 0.0035 | 187.4 ± 3.7 | 0.4586 ± 0.0117 | 0.0006 ± 0.0001 |

Table 5: FID (lower is better), and Precision and Recall scores (higher is better) on SVHN and CIFAR-10. Means ± standard errors across 3 runs are shown. Unreliable scores are highlighted in red.

| MODEL     | MNIST       | FMNIST      | SVHN        | CIFAR-10    |
|-----------|-------------|-------------|-------------|-------------|
| BiGAN     | 150.0 ± 1.5 | 139.0 ± 1.0 | 105.5 ± 5.2 | 170.9 ± 4.3 |
| BiGAN+    | 135.2 ± 0.2 | 113.0 ± 0.6 | 114.4 ± 4.9 | 152.9 ± 0.6 |
| BiGAN+ARM | 112.6 ± 1.6 | 94.9 ± 0.7  | 60.8 ± 1.6  | 210.7 ± 1.6 |
| BiGAN+AVB | 149.9 ± 3.3 | 141.5 ± 1.7 | 67.2 ± 2.6  | 215.7 ± 1.0 |
| BiGAN+EBM | 120.7 ± 4.7 | 108.1 ± 2.4 | 66.5 ± 1.3  | 217.5 ± 1.8 |
| BiGAN+NF  | 112.4 ± 1.4 | 95.0 ± 0.8  | 60.2 ± 1.5  | 211.6 ± 1.7 |
| BiGAN+VAE | 127.9 ± 1.6 | 115.5 ± 1.4 | 63.6 ± 1.4  | 216.3 ± 1.2 |
| WAE       | 19.8 ± 1.6  | 45.1 ± 0.8  | 52.7 ± 0.6  | 187.4 ± 0.4 |
| WAE+      | 16.7 ± 0.4  | 45.2 ± 0.2  | 53.2 ± 0.4  | 179.7 ± 1.3 |
| WAE+ARM   | 15.2 ± 0.5  | 46.1 ± 0.3  | 73.1 ± 1.8  | 182.3 ± 1.7 |
| WAE+AVB   | 17.6 ± 0.3  | 47.7 ± 0.9  | 60.2 ± 3.8  | 157.6 ± 0.8 |
| WAE+EBM   | 23.7 ± 1.0  | 60.2 ± 1.4  | 70.6 ± 1.5  | 161.0 ± 4.7 |
| WAE+NF    | 20.7 ± 2.2  | 52.1 ± 2.9  | 57.6 ± 3.8  | 178.2 ± 2.8 |
| WAE+VAE   | 16.4 ± 0.6  | 50.9 ± 0.5  | 72.2 ± 1.9  | 178.3 ± 2.6 |

Table 6: FID scores (lower is better) for non-likelihood based GAEs and two-step models. These GAEs are not trained to maximize likelihood, so Theorem 1 does not apply. Means ± standard errors across 3 runs are shown. Unreliable scores are shown in red. Samples for unreliable scores are provided in Fig. 11.

## D.5 High Resolution Image Generation

As mentioned in the main manuscript, we attempted to use our two-step methodology to improve upon a high-performing GAN model: a StyleGAN2 (Karras et al., 2020b). We used the PyTorch (Paszke et al., 2019) code of Karras et al. (2020a), which implements the optimization-based projection method of Karras et al. (2020b). That is, we did not explicitly construct g, and used this optimization-based GAN inversion method to recover $\{z_n\}_{n=1}^N$ on the FFHQ dataset (Karras et al., 2019), with the intention of training low-dimensional DGMs to produce high resolution images. This method projects into the intermediate 512-dimensional space referred to as W by default (Karras et al., 2020b). We also adapted this method to the GAN's true latent space, referred to as Z, during which we decreased the initial learning rate to 0.01 from the default of 0.1. In experiments with optimization-based inversion into the latent spaces g(M) = W and g(M) = Z, reconstructions $\{G(z_n)\}_{n=1}^N$ yielded FIDs of 13.00 and 25.87, respectively. In contrast, the StyleGAN2 achieves an FID score of 5.4 by itself, which is much better than the scores achieved by the reconstructions (perfect reconstructions would achieve scores of 0).

The FID between the reconstructions and the ground truth images represents an approximate lower bound on the FID score attainable by the two-step method, since the second step estimates the distribution of the projected latents $\{z_n\}_{n=1}^N$. Since reconstructing the entire FFHQ dataset of 70000 images would be expensive (for instance, W-space reconstructions take about 90 seconds per image), we computed the FID (again using the code of Karras et al. (2020a)) between the first 10000 images of FFHQ and their reconstructions.

We also experimented with the approach of Huh et al. (2020), which inverts into Z-space, but it takes about 10 minutes per image and was thus prohibitively expensive. Most other GAN inversion work (Xia et al.,
2021) has projected images into the extended 512 × 18-dimensional W+ space, which describes a different intermediate latent input w for each layer of the generator. Since this latent space is higher in dimension than the true model manifold, we did not pursue these approaches. The main obstacle to improving StyleGAN2's FID using the two-step procedure appears to be reconstruction quality. Since the goal of our experiments is to highlight the benefits of two-step procedures rather than proposing new GAN inversion methods, we did not further pursue this direction, although we hope our results will encourage research improving GAN inversion methods and exploring their benefits within two-step models.

## D.6 OOD Detection

OOD Metric We now precisely describe our classification metric, which properly accounts for datasets of imbalanced size and ensures correct directionality, in that higher likelihoods are considered to be in-distribution. First, using the in- and out-of-sample training likelihoods, we train a decision stump, i.e. a single-threshold-based classifier. Then, calling that threshold $T$, we count the number of in-sample test likelihoods which are greater than $T$, $n_{I>T}$, and the number of out-of-sample test likelihoods which are greater than $T$, $n_{O>T}$. Then, calling the number of in-sample test points $n_I$, and the number of OOD test points $n_O$, our final classification rate $\mathbf{acc}$ is given as:

$$\mathbf{acc}=\frac{n_{I>T}+\frac{n_{I}}{n_{O}}\cdot(n_{O}-n_{O>T})}{2n_{I}}.\tag{45}$$

Intuitively, we can think of this metric as simply the fraction of correctly-classified points (i.e. $\mathrm{acc}' = \frac{n_{I>T} + (n_O - n_{O>T})}{n_I + n_O}$), but with the contributions from the OOD data re-weighted by a factor of $\frac{n_I}{n_O}$ to ensure both datasets are equally weighted in the metric. Note that this metric is sometimes referred to as balanced accuracy, and can also be understood as the average between the true positive and true negative rates.
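For reference, a small sketch of Eq. (45) is shown below; it assumes the threshold has already been fitted on training likelihoods (e.g. with a decision stump) and that the test log-likelihoods are passed in as arrays.

```python
import numpy as np

def ood_balanced_accuracy(in_test_loglik, out_test_loglik, threshold):
    """Balanced accuracy of Eq. (45): likelihoods above the threshold are
    classified as in-distribution."""
    in_ll = np.asarray(in_test_loglik)
    out_ll = np.asarray(out_test_loglik)
    n_i, n_o = len(in_ll), len(out_ll)
    n_i_gt = (in_ll > threshold).sum()    # n_{I > T}
    n_o_gt = (out_ll > threshold).sum()   # n_{O > T}
    return (n_i_gt + (n_i / n_o) * (n_o - n_o_gt)) / (2 * n_i)
```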

We show further OOD detection results using log pZ in Table 7, and using log pX in Table 8. Note that, for one-step models, we record results for log pX, the log-density of the model, in place of log pZ (which is not defined).

Table 7: OOD classification accuracy as a percentage (higher is better), using log pZ. Means ± standard errors across 3 runs are shown. Arrows point from in-distribution to OOD data.

| MODEL   | FMNIST → MNIST   | CIFAR-10 → SVHN   |
|---------|------------------|-------------------|
| AVB+    | 96.0 ± 0.5       | 23.4 ± 0.1        |
| AVB+ARM | 89.9 ± 2.4       | 40.6 ± 0.2        |
| AVB+AVB | 74.4 ± 2.2       | 45.2 ± 0.2        |
| AVB+EBM | 49.5 ± 0.1       | 49.0 ± 0.0        |
| AVB+NF  | 89.2 ± 0.9       | 46.3 ± 0.9        |
| AVB+VAE | 78.4 ± 1.5       | 40.2 ± 0.1        |
| VAE+    | 96.1 ± 0.1       | 23.8 ± 0.2        |
| VAE+ARM | 92.6 ± 1.0       | 39.7 ± 0.4        |
| VAE+AVB | 80.6 ± 2.0       | 45.4 ± 1.1        |
| VAE+EBM | 54.1 ± 0.7       | 49.2 ± 0.0        |
| VAE+NF  | 91.7 ± 0.3       | 47.1 ± 0.1        |
| ARM+    | 9.9 ± 0.6        | 15.5 ± 0.0        |
| AE+ARM  | 86.5 ± 0.9       | 37.4 ± 0.2        |
| EBM+    | 32.5 ± 1.1       | 46.4 ± 3.1        |
| AE+EBM  | 50.9 ± 0.2       | 49.4 ± 0.6        |

| MODEL     | FMNIST → MNIST   | CIFAR-10 → SVHN   |
|-----------|------------------|-------------------|
| AVB+      | 96.0 ± 0.5       | 23.4 ± 0.1        |
| AVB+ARM   | 90.8 ± 1.8       | 37.7 ± 0.5        |
| AVB+AVB   | 75.0 ± 2.2       | 43.7 ± 2.0        |
| AVB+EBM   | 53.3 ± 7.1       | 39.1 ± 0.9        |
| AVB+NF    | 89.2 ± 0.8       | 43.9 ± 1.3        |
| AVB+VAE   | 78.7 ± 1.6       | 40.2 ± 0.2        |
| VAE+      | 96.1 ± 0.1       | 23.8 ± 0.2        |
| VAE+ARM   | 93.7 ± 0.7       | 37.6 ± 0.4        |
| VAE+AVB   | 82.4 ± 2.4       | 42.2 ± 1.0        |
| VAE+EBM   | 63.7 ± 1.7       | 42.4 ± 0.9        |
| VAE+NF    | 91.7 ± 0.3       | 42.4 ± 0.3        |
| ARM+      | 9.9 ± 0.6        | 15.5 ± 0.0        |
| AE+ARM    | 89.5 ± 0.2       | 33.8 ± 0.3        |
| EBM+      | 32.5 ± 1.1       | 46.4 ± 3.1        |
| AE+EBM    | 56.9 ± 14.4      | 34.5 ± 0.1        |
| BiGAN+ARM | 81.5 ± 1.4       | 35.7 ± 0.4        |
| BiGAN+AVB | 59.6 ± 3.2       | 34.3 ± 2.3        |
| BiGAN+EBM | 57.4 ± 1.7       | 47.7 ± 0.7        |
| BiGAN+NF  | 83.7 ± 1.2       | 39.2 ± 0.3        |
| BiGAN+VAE | 59.3 ± 2.1       | 35.6 ± 0.4        |
| WAE+ARM   | 89.0 ± 0.5       | 38.1 ± 0.6        |
| WAE+AVB   | 74.5 ± 1.3       | 43.1 ± 0.7        |
| WAE+EBM   | 36.5 ± 1.6       | 36.8 ± 0.4        |
| WAE+NF    | 85.7 ± 2.8       | 40.2 ± 1.8        |
| WAE+VAE   | 87.7 ± 0.7       | 38.3 ± 0.4        |

Table 8: OOD classification accuracy as a percentage (higher is better), using log pX. Means ± standard errors across 3 runs are shown. Arrows point from in-distribution to OOD data.

![46_image_0.png](46_image_0.png)

Figure 14: Comparison of the distribution of log-likelihood values between in-distribution (green) and out-of-distribution (blue) data. In both cases, the two-step models push the in-distribution likelihoods further to the right than the NF+ model alone. N.B.: The absolute values of the likelihoods for the NF+ model on its own are off by a constant factor because of the aforementioned whitening transform used to scale the data before training. However, the relative values within a single plot remain correct.

![46_image_1.png](46_image_1.png)

Figure 15: Comparison of the distribution of log-likelihood values between in-distribution (green) and out-of-distribution (blue) data for VAE-based models. While the VAE+ model does well on FMNIST → MNIST, its performance is poor for CIFAR-10 → SVHN. The two-step model VAE+NF improves on the CIFAR-10 → SVHN task.

![47_image_0.png](47_image_0.png)

Figure 16: Comparison of the distribution of log-likelihood values between in-distribution (green) and out-of-distribution (blue) data for AVB-based models. While the AVB+ model does well on FMNIST → MNIST, its performance is poor for CIFAR-10 → SVHN. The two-step model AVB+NF improves on the CIFAR-10 → SVHN task.