RedTachyon
committed on
Commit
•
4d5bd37
1 Parent(s):
cc42420
Upload folder using huggingface_hub
Browse files
- Ofi4f77DoK/10_image_0.png +3 -0
- Ofi4f77DoK/15_image_0.png +3 -0
- Ofi4f77DoK/15_image_1.png +3 -0
- Ofi4f77DoK/15_image_2.png +3 -0
- Ofi4f77DoK/15_image_3.png +3 -0
- Ofi4f77DoK/16_image_0.png +3 -0
- Ofi4f77DoK/2_image_0.png +3 -0
- Ofi4f77DoK/3_image_0.png +3 -0
- Ofi4f77DoK/4_image_0.png +3 -0
- Ofi4f77DoK/5_image_0.png +3 -0
- Ofi4f77DoK/5_image_1.png +3 -0
- Ofi4f77DoK/8_image_0.png +3 -0
- Ofi4f77DoK/9_image_0.png +3 -0
- Ofi4f77DoK/Ofi4f77DoK.md +438 -0
- Ofi4f77DoK/Ofi4f77DoK_meta.json +25 -0
Ofi4f77DoK/10_image_0.png ADDED (Git LFS)
Ofi4f77DoK/15_image_0.png ADDED (Git LFS)
Ofi4f77DoK/15_image_1.png ADDED (Git LFS)
Ofi4f77DoK/15_image_2.png ADDED (Git LFS)
Ofi4f77DoK/15_image_3.png ADDED (Git LFS)
Ofi4f77DoK/16_image_0.png ADDED (Git LFS)
Ofi4f77DoK/2_image_0.png ADDED (Git LFS)
Ofi4f77DoK/3_image_0.png ADDED (Git LFS)
Ofi4f77DoK/4_image_0.png ADDED (Git LFS)
Ofi4f77DoK/5_image_0.png ADDED (Git LFS)
Ofi4f77DoK/5_image_1.png ADDED (Git LFS)
Ofi4f77DoK/8_image_0.png ADDED (Git LFS)
Ofi4f77DoK/9_image_0.png ADDED (Git LFS)
Ofi4f77DoK/Ofi4f77DoK.md
ADDED
@@ -0,0 +1,438 @@
# mDAE: Modified Denoising AutoEncoder for Missing Data Imputation

Anonymous authors. Paper under double-blind review.

## Abstract

This paper introduces a methodology based on the Denoising AutoEncoder (DAE) for missing data imputation. The proposed methodology, called mDAE hereafter, results from a modification of the loss function and a straightforward procedure for choosing the hyper-parameters. An ablation study on several UCI Machine Learning Repository datasets shows the benefit of using this modified loss function and an overcomplete structure, in terms of Root Mean Squared Error (RMSE) of reconstruction. This numerical study is completed by comparing the mDAE methodology with eight other methods (four standard and four more recent). A criterion called Mean Distance to Best (MDB) is proposed to measure how well a method performs globally on all datasets. This criterion is defined as the mean (over the datasets) of the distances between the RMSE of the considered method and the RMSE of the best method. According to this criterion, the mDAE methodology was consistently ranked among the top methods (along with SoftImpute and missForest), while the four more recent methods were systematically ranked last. The Python code of the numerical study will be available on GitHub so that results can be reproduced or generalized with other datasets and methods.

## 1 Introduction

With the rapid increase in data collection, missing values are a ubiquitous challenge across various domains. Data may be missing for several reasons: for instance, values were never collected, records were lost, or merging several datasets failed. It is then generally necessary to deal with this problem before applying machine learning methods to these data. Several options are available to address this issue, including removing or recreating the missing values. Removing rows or columns containing missing data results in a considerable loss of information when missing data are distributed across multiple locations in the dataset. One usually prefers missing-data imputation, which consists of filling missing entries with estimated values using the observed data. Missing data imputation is a very active research area (Van Buuren, 2018; Little & Rubin, 2019), with more than 150 implementations available according to Mayer et al. (2021). This paper focuses on state-of-the-art imputation methods categorized as standard machine learning, deep learning or optimal transport. Methods based on standard machine learning include, among others, k-nearest neighbours (Troyanskaya et al., 2001), matrix completion via iterative soft-thresholded SVD (Mazumder et al., 2010), Multivariate Imputation by Chained Equations (Van Buuren & Groothuis-Oudshoorn, 2011) or missForest (Stekhoven & Bühlmann, 2012). Methods based on deep learning include, among others, Generative Adversarial Networks (Goodfellow et al., 2014; Yoon et al., 2018), Variational AutoEncoders (Kingma & Welling, 2013; Ivanov et al., 2018; Mattei & Frellsen, 2019; Peis et al., 2022) and methods based on Denoising AutoEncoders (see e.g. the review of Pereira et al., 2020). One can also mention the recent works of Muzellec et al. (2020) and Zhao et al. (2023) based on optimal transport.

This paper proposes a modified Denoising AutoEncoder (mDAE) dedicated to imputing missing values in numerical tabular data. AutoEncoders (AE) (Bengio et al., 2009) are artificial neural networks used to learn efficient representations of unlabeled data (encodings) together with a decoding function that recreates the input data from the encoded representation. Denoising AutoEncoders (DAEs) were first proposed by Vincent et al. (2008) to recover the original data, without noise, from noisy data, by corrupting the inputs of a standard AE. For example, inputs can be corrupted by masking noise, where a fixed proportion of the inputs are randomly set to 0. DAEs, initially proposed for extracting robust features in the deep learning context, have also been used for missing data imputation (see e.g. Duan et al., 2014; Gondara & Wang, 2018; Ryu et al., 2020). Indeed, DAEs, designed to recover a clean output from a noisy input, are naturally suited as an imputation method by considering missing values as a particular case of noisy input. The review of Pereira et al. (2020) covers 26 papers that use AEs and their variants (DAEs and VAEs) for the imputation of tabular data.

In all these articles, except Beaulieu-Jones & Moore (2017), the reconstruction of missing data with DAEs boils down to applying a DAE to pre-imputed data (e.g., by mean imputation). Pre-imputation solves the problem of loss functions that cannot handle missing values and require all features to be complete. However, in doing so, DAEs learn to reconstruct pre-imputed values, which does not seem relevant. In the same spirit as Beaulieu-Jones & Moore (2017), we propose to deal with this problem by modifying the loss function to ignore pre-imputed missing values. We show in an ablation study that using this modified loss function in the mDAE methodology results in better reconstruction of missing values than using the unmodified loss function on pre-imputed data, thus showing the contribution of the mDAE method compared with previous imputation methods based on DAEs. Moreover, Pereira et al. (2020) point out that the vast majority of these methods do not justify the decisions made for the choice of the structure of the DAE and the choice of the hyper-parameters. Most choices are based on empirical guesses, while just a few exceptions use grid-search approaches. Following the recommendations of Pereira et al. (2020), we propose a general and reproducible grid-search methodology for choosing the hyper-parameter and the structure. Moreover, the ablation study provides recommendations for structure and hyper-parameter selection that can be used when the grid-search approach is too computationally expensive.

As mentioned above, the proposed mDAE methodology is evaluated via an ablation study to check the relevance of some of its components (modification of the loss function, choice of the hyper-parameter by cross-validation, and overcomplete structure). The importance of each component is evaluated using the Root Mean Squared Error (RMSE) of reconstruction of missing values artificially added to 7 datasets from the UCI Machine Learning Repository (Dua & Graff, 2017). As far as we know, there are no standard benchmark datasets for missing data imputation; the 26 papers studied in the review of Pereira et al. (2020) almost all use different datasets. Here, we have chosen 7 of the 23 UCI Machine Learning Repository datasets recently used by Muzellec et al. (2020) for imputation method comparison. These 7 datasets had to be all numerical (as the mDAE method is suited for numerical missing values only), of different sizes, and not too numerous (to avoid the experimental setup being time-consuming and impractical). This ablation study has two objectives: firstly, to show the benefits of using the modified loss function and thus compare the mDAE method with standard DAE imputation approaches; secondly, to show the benefit of choosing an overcomplete structure for the DAE and to verify the benefit of choosing the hyper-parameter by optimization in a grid.

After this ablation study, the mDAE method is compared with eight other imputation methods (4 based on standard machine learning and 4 based on deep learning and optimal transport), along with the reference method of mean imputation. The 10 imputation methods are again compared using the Root Mean Squared Error (RMSE) of reconstruction of missing values artificially added to the 7 datasets used in the ablation study. Moreover, as in Muzellec et al. (2020) and Zhao et al. (2023), three missing data mechanisms are considered (Missing Completely At Random, Missing At Random, Missing Not At Random). The RMSE scores of the 10 methods are then computed for each of the 7 datasets (and different percentages and mechanisms of artificial missing data). To make results easier to interpret, we propose a new criterion called Mean Distance to the Best (MDB) to measure how well a method performs globally on all datasets (for a given percentage and a given mechanism of artificial missing data). This criterion is defined as the mean (over the datasets) of the distances between the RMSE of the considered method and the RMSE of the best method. It is equal to 0 if the RMSE of the method is the best for all datasets, and it increases if the RMSE of the method is far from the RMSE of the best method, on average, over the datasets. While the proposed mDAE method sometimes gives the best RMSE score (for a given dataset), this is not true for all datasets.

Considering all datasets, the MDB criterion ranks three methods based on standard machine learning and the mDAE methodology in the top 4 positions, for all configurations considered in this numerical study. More precisely, the mDAE method is generally placed second or third for this criterion (alternating with the missForest method), with the SoftImpute method always ranked first. It should be noted that the 4 recent methods based on deep learning and optimal transport are always ranked in the last positions, quite far from the best methods.

According to Pereira et al. (2020), most papers that use DAEs to impute missing data report better results than SOTA (State Of The Art) methods. However, very different methodologies, with different datasets and different SOTA methods, were used in these papers, making a general conclusion difficult. One of the contributions of this article is to propose a comparison methodology that is as accurate and reproducible as possible, so that other researchers can use it with other datasets or imputation methods using Python code that will be available on GitHub.

## 2 The mDAE Method

AutoEncoders (AE) (Bengio et al., 2009) are well-known artificial neural networks used to learn an efficient representation of unlabeled data via an encoding function and to recreate the input data via a decoding function. Here, we are dealing with the special case of tabular numerical data, and we suppose that these data have been normalized so that the p features have zero mean and unit variance. This normalization via feature standardization is more appropriate here than normalizing the values between 0 and 1, as is often done when using autoencoders. The input of the AE is then a set of n observations {x_1, ..., x_n} in R^p, which form the rows of a standardized data matrix X = (x_ij) of dimension n × p, where p is the number of features.

The encoding function f_θ of a basic autoencoder (see Figure 1) transforms an input x_i ∈ R^p into a latent vector y_i ∈ R^q:

$$\mathbf{y}_{i}=f_{\theta}(\mathbf{x}_{i})=s(\mathbf{W}\mathbf{x}_{i}+\mathbf{b}),$$

where W ∈ R^{q×p} is a weight matrix, b ∈ R^q is a bias vector and s is an activation function (e.g., ReLU or sigmoid). The decoding function g_θ′ then transforms the latent vector y_i ∈ R^q into an output z_i ∈ R^p:

$$\mathbf{z}_{i}=g_{\theta^{\prime}}(\mathbf{y}_{i})=s(\mathbf{W}^{\prime}\mathbf{y}_{i}+\mathbf{b}^{\prime}),$$

where W′ ∈ R^{p×q} and b′ ∈ R^p. Here, the activation function s in the output layer must be the identity function, since we are trying to reconstruct inputs that take their values in R. In fact, the sigmoid (resp. ReLU) activation function gives output values between 0 and 1 (resp. positive values), which is not appropriate here.

![2_image_0.png](2_image_0.png)

Figure 1: Scheme of a basic AutoEncoder (AE).

In general, autoencoders have more than one hidden layer, and the parameters θ = (W_1, ..., W_K, b_1, ..., b_K) of the encoder and θ′ = (W′_1, ..., W′_K, b′_1, ..., b′_K) of the decoder are learned by minimization of the so-called reconstruction loss. With standardized numerical data, the reconstruction loss usually used to learn the weights and biases of an autoencoder is the L2 loss defined by:

$$\mathcal{L}_{AE}=\sum_{i=1}^{n}\underbrace{\|\mathbf{x}_{i}-(g_{\theta^{\prime}}\circ f_{\theta})(\mathbf{x}_{i})\|^{2}}_{L(\mathbf{x}_{i},\mathbf{z}_{i})}=\|\mathbf{X}-\mathbf{Z}\|_{F}^{2},\tag{1}$$

where L is the loss function defined here as the squared Euclidean distance between the input x_i and its reconstruction z_i = (g_θ′ ∘ f_θ)(x_i), and ∥X − Z∥_F is the Frobenius norm between the data matrix X and its reconstructed matrix Z. Note that this criterion favors the reconstruction of features (columns of X) with high variance. It is therefore important that the data matrix X is standardized.

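To make the encoder/decoder definitions above concrete, here is a minimal sketch of a one-hidden-layer autoencoder with an identity output activation, trained with the L2 loss (1). It is an illustrative reimplementation assuming PyTorch, not the code released with the paper; the latent size q is a placeholder.

```python
import torch
import torch.nn as nn

class BasicAE(nn.Module):
    """One-hidden-layer autoencoder: f_theta then g_theta', identity output activation."""
    def __init__(self, p: int, q: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(p, q), nn.ReLU())  # y_i = s(W x_i + b)
        self.decoder = nn.Linear(q, p)                            # z_i = W' y_i + b'

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Reconstruction loss (1) on a standardized matrix X of shape (n, p):
# model = BasicAE(p=X.shape[1], q=16)
# loss = ((model(X) - X) ** 2).sum()   # squared Frobenius norm of X - Z
```
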
Denoising AutoEncoders (DAE) (Vincent et al., 2008) are autoencoders defined to remove noise from a given input. To do this, an autoencoder is trained to output the original data using corrupted data as input. The masking noise, for instance, is a corrupting process where each observation x_i is corrupted by randomly setting a proportion µ of its components to zero. Let N(x_i) denote this corrupted version of x_i. The loss L(x_i, z_i) is here slightly different from the one in (1), as it compares the input x_i with the output z_i = (g_θ′ ∘ f_θ)(N(x_i)) obtained with the corrupted observation N(x_i) (see Figure 2). The L2 reconstruction loss (1) then writes, for DAEs:

$$\mathcal{L}_{DAE}=\sum_{i=1}^{n}\underbrace{\left\|\mathbf{x}_{i}-(g_{\theta^{\prime}}\circ f_{\theta})(N(\mathbf{x}_{i}))\right\|^{2}}_{L(\mathbf{x}_{i},\mathbf{z}_{i})}=\left\|\mathbf{X}-\mathbf{Z}\right\|_{F}^{2}.\tag{2}$$

Note that the proportion µ of the masking noise is a hyper-parameter that may need to be calibrated.

![3_image_0.png](3_image_0.png)

Figure 2: Scheme of a denoising AutoEncoder (DAE). Red crosses represent the values in N(x_i) randomly set to 0.

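As an illustration of the masking-noise corruption N(·), the following sketch (ours, not taken from the paper) zeroes each entry independently with probability µ, which corrupts a proportion µ of the inputs in expectation.

```python
import numpy as np

def masking_noise(x: np.ndarray, mu: float, rng: np.random.Generator) -> np.ndarray:
    """Return a copy of x where each entry is set to 0 with probability mu."""
    dropped = rng.random(x.shape) < mu   # True for the corrupted entries
    x_noisy = x.copy()
    x_noisy[dropped] = 0.0
    return x_noisy

# Example: corrupt a standardized mini-batch before feeding it to the DAE.
# rng = np.random.default_rng(0)
# X_noisy = masking_noise(X_batch, mu=0.2, rng=rng)
```
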
## 2.1 Imputing Missing Values Using the mDAE Methodology

Although DAE methods were first proposed for extracting robust features in the deep learning context (Vincent et al., 2008), they have also been used for missing data imputation (see e.g. the review of Pereira et al., 2020). Indeed, as DAEs were defined to reconstruct noisy data, they are naturally suited to reconstruct missing data, the missing data then being considered as noise.

As Pereira et al. (2020) point out in their review article, almost all works using DAEs to impute missing data boil down to applying a DAE to pre-imputed data (e.g. by mean imputation). Let X now be the incomplete standardized data matrix (the data matrix with missing values). Pre-imputation of X by the mean of each feature simply consists of replacing missing values with 0, since the data are standardized and, therefore, the feature means are all equal to 0. The pre-imputed data matrix X̃ then writes as the projection of X onto the observed entries:

$$\tilde{\mathbf{X}}=P_{\Omega}(\mathbf{X})=\begin{cases}x_{ij}&\text{if }(i,j)\in\Omega,\\0&\text{if }(i,j)\notin\Omega,\end{cases}$$

where Ω is the set of indices (i, j) ∈ {1, ..., n} × {1, ..., p} where the values x_ij are not missing. The DAE is then trained to reconstruct the pre-imputed data matrix X̃ by minimization of the reconstruction loss (2), which in this case is:

$$\mathcal{L}_{DAE}=\sum_{i=1}^{n}\left\|\tilde{\mathbf{x}}_{i}-(g_{\theta^{\prime}}\circ f_{\theta})(N(\tilde{\mathbf{x}}_{i}))\right\|^{2}=\left\|P_{\Omega}(\mathbf{X})-\mathbf{Z}\right\|_{F}^{2},\tag{3}$$

where Z is now the reconstruction of the pre-imputed matrix X̃. After training, the missing values in X are replaced by those reconstructed in Z, and the imputed data matrix is:

$$\hat{\mathbf{X}}=P_{\Omega}(\mathbf{X})+P_{\Omega^{\perp}}(\mathbf{Z}),\tag{4}$$

where Ω⊥ is the set of indices (i, j) ∈ {1, ..., n} × {1, ..., p} where x_ij is missing.

While using a pre-imputed matrix X̃ solves the problem of a loss function that cannot handle missing values, minimizing the reconstruction loss (3) teaches the DAE to reconstruct zeros at the locations of the missing values, which is irrelevant (see Figure 3). Our proposal is then not only to apply a DAE to the pre-imputed data matrix as in previous works, but also to modify the reconstruction error (3) to skip these locations (see Figure 4). This methodology, hereafter called mDAE, performs a DAE on standardized and pre-imputed data, using the following loss function:

$$\mathcal{L}_{mDAE}=\|P_{\Omega}(\mathbf{X})-P_{\Omega}(\mathbf{Z})\|_{F}^{2}.\tag{5}$$

![4_image_0.png](4_image_0.png)

Figure 3: Scheme of a DAE directly applied on pre-imputed data. Violet dots in x̃_i represent the missing values set to 0. Red crosses in N(x̃_i) represent the values randomly set to 0.

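The sketch below illustrates, under the same assumptions as before (PyTorch, with a boolean mask of observed entries), how the modified loss (5) and the imputation step (4) can be written. It is an assumed reimplementation, not the authors' code.

```python
import torch

def mdae_loss(x_pre: torch.Tensor, z: torch.Tensor, observed: torch.Tensor) -> torch.Tensor:
    """Squared Frobenius norm restricted to the observed entries Omega (Equation 5).

    x_pre    : pre-imputed standardized data (missing entries set to 0)
    z        : network output for the corrupted input N(x_pre)
    observed : boolean mask, True where the entry is observed
    """
    diff = (x_pre - z) * observed.float()   # P_Omega(X) - P_Omega(Z)
    return (diff ** 2).sum()

def impute(x_pre: torch.Tensor, z: torch.Tensor, observed: torch.Tensor) -> torch.Tensor:
    """Keep observed values, fill missing entries with the reconstruction (Equation 4)."""
    return torch.where(observed, x_pre, z)
```
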
## 2.2 Choice of the Hyper-Parameter µ

The hyper-parameter µ of the mDAE methodology for missing value imputation is the proportion of zeros used to corrupt the data with the masking noise (red crosses in N(x̃_i) in Figure 4). This hyper-parameter can be chosen randomly in a grid of values µ in [0, 1]. Alternatively, it can be chosen through an optimized procedure to minimize an error of reconstruction of the missing values. For that purpose, the non-missing values of the original data are split into two sets: a training set to learn the parameters and a validation set to estimate the error of reconstruction of missing values. Let V ⊂ Ω be the subset of indices (i, j) of the validation set, drawn randomly from the set of observed entries Ω. For each value of µ in the grid, the error of reconstruction of the missing values is estimated using the following procedure:

1. The parameters of the mDAE are learned on the training set Ω \ V by minimization of the reconstruction loss:

$$\mathcal{L}_{mDAE}=\|P_{\Omega\setminus V}(\mathbf{X})-P_{\Omega\setminus V}(\mathbf{Z})\|_{F}^{2},\tag{6}$$

where Ω \ V is the set of observed entries minus those drawn at random for the validation.

![5_image_0.png](5_image_0.png)

Figure 4: Scheme of a mDAE. Violet dots in x̃_i represent the missing values set to 0. Violet dots in z̃_i represent the predicted values set to 0. Red crosses in N(x̃_i) represent the values randomly set to 0.

2. The mean squared error (MSE) of reconstruction of the missing values is estimated on the validation set by:

$$MSE_{val}=\frac{1}{|V|}\|P_{V}(\mathbf{X})-P_{V}(\mathbf{Z})\|_{F}^{2},\tag{7}$$

where Z is the matrix reconstructed with the mDAE learned on the training set Ω \ V and |V| is the cardinal of the validation set.

The previous two steps are repeated B times (for B random draws of the validation set V), and the mean of these reconstruction errors is taken to get a more robust estimation.

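A sketch of this selection procedure is given below. The helpers train_mdae and reconstruct are hypothetical placeholders for the training and prediction steps (they are not defined in the paper), and the grid, the value of B and the validation fraction are illustrative only.

```python
import numpy as np

def select_mu(X, observed, mu_grid, B=8, val_fraction=0.1, rng=None):
    """Return the mu in mu_grid with the smallest mean validation MSE (Equation 7)."""
    rng = rng or np.random.default_rng(0)
    obs_idx = np.argwhere(observed)                       # indices in Omega
    scores = {}
    for mu in mu_grid:
        mses = []
        for _ in range(B):
            # draw a validation subset V of the observed entries
            pick = rng.choice(len(obs_idx), int(val_fraction * len(obs_idx)), replace=False)
            v = obs_idx[pick]
            train_mask = observed.copy()
            train_mask[v[:, 0], v[:, 1]] = False          # Omega \ V
            model = train_mdae(X, train_mask, mu=mu)      # minimizes loss (6), placeholder
            Z = reconstruct(model, X, train_mask)         # placeholder prediction step
            diff = X[v[:, 0], v[:, 1]] - Z[v[:, 0], v[:, 1]]
            mses.append(np.mean(diff ** 2))               # MSE_val, Equation (7)
        scores[mu] = np.mean(mses)
    return min(scores, key=scores.get)
```
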
## 2.3 Choice of the Structure

Two families of structures are known for autoencoders: the undercomplete case, where the hidden layer is smaller than the input layer, and the overcomplete case, where it is bigger. While an overcomplete structure is not relevant for plain autoencoders, it is well known that denoising autoencoders work well with overcomplete structures. Here, a grid of 6 simple structures (two undercomplete and four overcomplete) is suggested to choose the "best" structure when using the mDAE method (see Figure 5). For each structure in this grid, the error of reconstruction of the missing values is estimated on validation data, using the same procedure as for the selection of the hyper-parameter µ (see Section 2.2). Ideally, the hyper-parameter µ and the structure should be chosen simultaneously by exhaustively considering all possible combinations. However, alternative grid searches exist, for instance sampling a given number of candidates from the parameter space.

![5_image_1.png](5_image_1.png)

Figure 5: A grid of 6 simple structures, where p is the number of units of the input layer.

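Programmatically, the candidate structures can be swept by building the network from a list of hidden-layer widths, as in the sketch below (assuming PyTorch). The candidate widths shown in the comment are placeholders for illustration, not the exact grid of Figure 5.

```python
import torch.nn as nn

def make_dae(p: int, hidden: list) -> nn.Sequential:
    """Stack Linear+ReLU hidden layers of the given widths, with an identity output layer."""
    layers, widths = [], [p] + list(hidden)
    for a, b in zip(widths[:-1], widths[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    layers.append(nn.Linear(widths[-1], p))   # identity activation on the output layer
    return nn.Sequential(*layers)

# Illustrative candidates (undercomplete and overcomplete), scored as in Section 2.2:
# candidates = [[p // 2], [p], [2 * p], [2 * p, 2 * p]]
```
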
## 3 Numerical Study

The first part of this numerical study concerns the properties of the mDAE methodology. More specifically, an ablation study is conducted to verify the relevance of the choices made to construct this methodology. The second part compares the mDAE method with other well-known or more recent methods for imputing missing data.

All comparisons are made using seven complete tabular datasets (without missing values) chosen among the 23 datasets of the UCI Machine Learning Repository recently used by Muzellec et al. (2020) for imputation method comparison. These 7 datasets (see Table 1) have been chosen to be all numerical (as the mDAE method is suited for numerical missing values only), of different sizes, and not too numerous (to avoid the experimental setup being time-consuming and impractical).

| Names                     | Abbreviations | Rows | Columns |
|---------------------------|---------------|------|---------|
| Breast cancer diagnostic  | breast        | 569  | 30      |
| Connectionist bench sonar | sonar         | 208  | 60      |
| Ionosphere                | iono          | 351  | 34      |
| Blood transfusion         | blood         | 748  | 4       |
| Seeds                     | seeds         | 210  | 7       |
| Climate model crashes     | climate       | 540  | 18      |
| Wine quality red          | wine          | 1599 | 10      |

Table 1: The seven datasets used in the numerical study.

To evaluate the imputation methods, a certain proportion of each dataset is first artificially replaced by missing values. The artificial missing values are drawn using either the MAR (Missing At Random), the MCAR (Missing Completely At Random) or the MNAR (Missing Not At Random) mechanism (see e.g. Rubin, 1976). Note that the MCAR and MAR missing values were generated using a logistic masking model as implemented in the GitHub repository of Muzellec.

Then, for a given mask Ω⊥ of artificial missing values, the performance of a method is evaluated using the Root Mean Squared Error (RMSE) between the initial data matrix X and the reconstructed data matrix Z on Ω⊥:

$$RMSE=\sqrt{\frac{1}{|\Omega^{\perp}|}\|P_{\Omega^{\perp}}(\mathbf{X})-P_{\Omega^{\perp}}(\mathbf{Z})\|_{F}^{2}},\tag{8}$$

where |Ω⊥| is the number of artificial missing values. To get more robust results, the process is repeated B times with B sets of artificial missing values drawn randomly using one of the three generation mechanisms. Finally, a method is evaluated by the mean and standard deviation of the B values of RMSE obtained with a certain proportion of artificial missing data and a certain mechanism of missing values (MAR, MCAR or MNAR).

Note that all the results presented in this section are reproducible using Python code, which will be available on GitHub.

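For completeness, the evaluation criterion (8) can be computed as in the short sketch below (our illustration), where `missing` is the boolean mask of the artificially removed entries Ω⊥ and the data are standardized.

```python
import numpy as np

def rmse_on_missing(X_true: np.ndarray, Z: np.ndarray, missing: np.ndarray) -> float:
    """RMSE restricted to the artificially masked entries (Equation 8)."""
    diff = (X_true - Z)[missing]
    return float(np.sqrt(np.mean(diff ** 2)))
```
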
## 3.1 Ablation Study of the mDAE Methodology

An ablation study is a methodology used to evaluate the importance of the different components of an algorithm, by comparing the results obtained with and without each component. Here, the following components of the mDAE methodology are studied:

- the use of the modified reconstruction loss (5) rather than the standard loss (3),
- the use of an optimized value of the hyper-parameter µ (as described in Section 2.2) rather than a value chosen randomly in [0, 1],
- the use of an overcomplete structure (the 5th structure in Figure 5) rather than an undercomplete structure (the 2nd structure in Figure 5).

Table 2 shows the results of the ablation study for the seven datasets and 20% of MCAR artificial missing values. The mean value over the B sets of artificial missing values (± the standard deviation) of the RMSE of reconstruction of the artificial missing values is calculated for each dataset with the mDAE method, with the method deprived of its modified loss function (i.e. with a standard L2 loss function), with the method deprived of its optimized choice of µ (i.e. with a random choice), and with the method deprived of its overcomplete structure (i.e. with an undercomplete structure). Each time, the loss of imputation quality (i.e. the increase of the mean RMSE) is measured between the mDAE without one of the three components (the modified loss, an optimized choice of µ or an overcomplete structure) and the complete mDAE. For instance, for the breast cancer dataset, using the standard L2 loss increases the mean RMSE by 46.99% = (0.685 − 0.466)/0.466.

| Method                 | breast        | climate       | sonar         | iono          | seeds         | wine          | blood         |
|------------------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|
| mDAE                   | 0.466 ± 0.016 | 1.007 ± 0.007 | 0.656 ± 0.007 | 0.776 ± 0.018 | 0.496 ± 0.022 | 0.790 ± 0.030 | 0.701 ± 0.059 |
| mDAE w/o modified loss | 0.685 ± 0.036 | 1.005 ± 0.008 | 0.988 ± 0.013 | 0.808 ± 0.020 | 0.587 ± 0.028 | 0.828 ± 0.034 | 0.755 ± 0.058 |
|                        | (46.996%)     | (-0.199%)     | (50.610%)     | (4.124%)      | (18.347%)     | (4.810%)      | (7.703%)      |
| mDAE w/o optimal µ     | 0.501 ± 0.043 | 1.030 ± 0.013 | 0.682 ± 0.049 | 0.802 ± 0.039 | 0.514 ± 0.054 | 0.853 ± 0.033 | 0.710 ± 0.055 |
|                        | (7.511%)      | (2.284%)      | (3.963%)      | (3.351%)      | (3.629%)      | (7.975%)      | (1.284%)      |
| mDAE w/o overcomplete  | 0.500 ± 0.011 | 1.147 ± 0.013 | 0.699 ± 0.008 | 0.808 ± 0.025 | 0.671 ± 0.209 | 0.932 ± 0.045 | 0.960 ± 0.140 |
|                        | (7.296%)      | (13.903%)     | (6.555%)      | (4.124%)      | (35.282%)     | (17.975%)     | (36.947%)     |

Table 2: Mean RMSE of reconstruction (± the standard deviation) for B = 8 random draws of 20% of MCAR artificial missing values. First row: results of the mDAE method (with the modified loss, the optimal choice of the hyper-parameter µ and an overcomplete structure). Second row: results without (w/o) the modified loss (with the standard L2 loss instead). Third row: results without (w/o) the optimal choice of µ (with a random choice of µ instead). Fourth row: results without (w/o) an overcomplete structure (with an undercomplete structure instead). The results in brackets are the growth rate of the average RMSE when the component under consideration is removed.

The first row in Table 2 shows that the mDAE methodology with its three components (modified loss function, optimized choice of µ and overcomplete structure 5 of Figure 5) consistently reconstructs missing data better, except for the climate data, where modifying the loss function does not improve the results. It should be noted that this last result is consistent with those obtained by Muzellec et al. (2020), who found that, for the climate dataset (and 30% MCAR), the 5 imputation methods compared in their article gave no better results (in terms of RMSE) than imputation by the mean.

The second row in Table 2 shows the improvement (in terms of RMSE) when using the modified loss function rather than simply a DAE on pre-imputed data (as in previous works). Not using the modified loss function increases the RMSE for the breast and seeds datasets by up to 50%, thus showing the contribution of the mDAE methodology.

The third row shows that using a random value of the hyper-parameter µ rather than an optimized one deteriorates the imputation quality for all datasets, but to a lesser extent (between 1 and 8% increase in RMSE). The gain obtained by choosing the best µ in a grid rather than randomly in [0, 1] is therefore small. This is an important result, as it allows the user to randomly choose the hyper-parameter µ in [0, 1] to save computation time when necessary.

The fourth row shows that using an undercomplete structure rather than an overcomplete one clearly increases the RMSE, by around 35% for two of the seven datasets. The choice of the structure is a central issue when using DAEs. This result can, therefore, be used to recommend the choice of an overcomplete structure and avoid the search for an optimal structure in a grid. Figure 6 completes the results of Table 2 by looking at the results for the 6 different structures given in Figure 5. It shows that here, the two undercomplete structures always give poorer results than the four overcomplete ones.

![8_image_0.png](8_image_0.png)

Figure 6: Mean RMSE of reconstruction (± the standard deviation) for B = 12 random draws of 20% of MCAR artificial missing values and 6 different structures of mDAE. The two first structures are undercomplete, the four last are overcomplete (see Figure 5).

Finally, the results of ablation studies for other types of artificial missing values (MAR and MNAR) and other proportions of artificial missing values (20% and 40%) are given in Appendix A. These results confirm the importance of the modification of the loss function, the importance of choosing an overcomplete structure, and the more relative importance of choosing µ by grid search rather than at random.

## 3.2 Comparison with Other Methods

This section compares the mDAE method with four relatively classic and four more recent methods (see Table 3). The first four methods are KNN (Troyanskaya et al., 2001), where missing values are replaced by a weighted average of the k nearest neighbours, SoftImpute (Mazumder et al., 2010), based on iterative soft-thresholded SVD, and two iterative chained equation methods (Van Buuren & Groothuis-Oudshoorn, 2011), which model features with missing values as a function of the others: the missForest method (Stekhoven & Bühlmann, 2012) is based on Random Forests, and the BayesianRidge method is based on ridge regressions to estimate the regression functions at each step. The four other (more recent) methods in Table 3 are GAIN (Yoon et al., 2018), which is an adaptation of Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) to impute missing data, MIWAE (Mattei & Frellsen, 2019), which is an adaptation of Variational AutoEncoders (VAE) (Kingma & Welling, 2013), and two methods using optimal transport: the algorithm called Batch Sinkhorn Imputation proposed by Muzellec et al. (2020), and the method TDM proposed by Zhao et al. (2023).

For KNN and SoftImpute, the hyperparameters are selected through cross-validation. According to the implementations used for the two chained equation methods, the hyperparameters of the Bayesian ridge regressions are estimated during the fits of the model. The hyperparameters of the Random Forests are 100 trees, and all features are considered when looking for the best split (i.e., bagged trees). The hyperparameter settings recommended in the corresponding papers and implementations are used for the four last methods.

For the mDAE method, the settings studied in Section 3.1 (choice of µ by cross-validation and the overcomplete structure 5 of Figure 5) are used. More favourable settings for the mDAE method would have been to select µ and the structure by cross-validation on all possible parameter combinations. This approach was not adopted in this numerical study for computation time reasons.

| Names                                                            | Abbreviations |
|------------------------------------------------------------------|---------------|
| k-nearest neighbors¹                                             | knn           |
| SoftImpute²                                                      | si            |
| missForest³                                                      | rf            |
| BayesianRidge³                                                   | br            |
| Generative Adversarial Imputation Network⁴                       | gain          |
| Missing Data Importance Weighted Autoencoders⁵                   | miwae         |
| Batch Sinkhorn Imputation²                                       | skh           |
| Transformed Distribution Matching for missing value imputation⁶  | tdm           |

Table 3: The methods used in the numerical study.

¹ Available in the class KNNImputer, https://scikit-learn.org/stable/api/sklearn.impute.html
² https://github.com/BorisMuzellec/MissingDataOT
³ Available in the class IterativeImputer, https://scikit-learn.org/stable/api/sklearn.impute.html
⁴ https://github.com/jsyoon0823/GAIN
⁵ https://github.com/pamattei/miwae
⁶ https://github.com/hezgit/TDM

With these settings of the hyperparameters, the eight methods of Table 3, as well as the mDAE method and the basic mean imputation method, are compared in Figure 7 on the 7 datasets and 20% of MCAR artificial missing values. The mean value (± the standard deviation) of the RMSE of reconstruction of the artificial missing values is plotted for each dataset and each method.

![9_image_0.png](9_image_0.png)

Figure 7: Mean RMSE of reconstruction (± the standard deviation) for B = 12 random draws of 20% of MCAR artificial missing values.

We note in Figure 7 that certain methods like SoftImpute (si), missForest (rf) or mDAE work reasonably well on all datasets (there is no dataset where their RMSE value is much worse than the others). It can also be noted that the mDAE method gives better or equivalent results on the 7 datasets than the four methods based on neural networks and optimal transport (gain, miwae, skh and tdm). But no method always wins. In order to measure how well a method performs globally on several datasets, we propose to use a new metric, called Mean Distance to the Best (MDB) hereafter. If I denotes the number of datasets and J the number of methods, the MDB of a method j is defined by:

$$MDB(j)=\frac{1}{I}\sum_{i=1}^{I}\left(R_{ij}-\operatorname*{min}_{\ell=1,\ldots,J}R_{i\ell}\right),\tag{9}$$

where R_ij is the RMSE obtained with the method j on the dataset i. MDB(j) is interpreted as the mean (over the datasets) of the distances between the RMSE of the method j and the RMSE of the best method. It is equal to 0 if the method j is the best for all datasets. It increases if the quality of the method j is far from the quality of the best method, on average over the datasets.

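The MDB criterion can be computed directly from the matrix of RMSE scores, as in the sketch below (our illustration, with rows indexing datasets and columns indexing methods).

```python
import numpy as np

def mean_distance_to_best(R: np.ndarray) -> np.ndarray:
    """MDB of each method (Equation 9); R has shape (I datasets, J methods)."""
    return (R - R.min(axis=1, keepdims=True)).mean(axis=0)
```
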
Figure 8 shows the MDB obtained with 20% of artificial MCAR missing values and the quality (the RMSE) of the methods plotted in Figure 7. This figure shows that the two best methods according to this criterion are SoftImpute (si) and missForest (rf). The mDAE method is the 3rd best method. Figures 9, 10 and 11 in Appendix B show the results with 40% of artificial MCAR missing values, and with 20% or 40% of MAR and MNAR missing values. With these different proportions and types of missing data, the top four remains SoftImpute, missForest, mDAE and BayesianRidge. SoftImpute is always in first place, tied once (40% MAR) with mDAE. The mDAE and missForest methods are generally one or the other in second and third position. The same type of results can be obtained for other proportions of artificial missing values using Python code that will be available on GitHub. The results obtained, for instance, with 10% of MCAR artificial missing values (see Figure 12 in Appendix B) confirm that the mDAE methodology and the three methods SoftImpute, BayesianRidge and missForest rank (according to the MDB criterion) ahead of KNN, ahead of the two methods gain and miwae (based on deep learning), and ahead of the two methods skh and tdm (based on optimal transport). The poor results of the four more recent methods based on neural networks and optimal transport (gain, miwae, skh and tdm) can be explained by the difficulty of choosing the best hyper-parameters, the default configurations recommended by the authors having been used here.

![10_image_0.png](10_image_0.png)

Figure 8: Mean Distance to the Best (MDB) obtained with 20% of MCAR artificial missing values.

## 4 Conclusion

This article proposes a methodology for missing data imputation based on DAEs, as well as a procedure for choosing the hyper-parameters (the proportion of noise µ and the structure of the network). An ablation study of this method was performed with different datasets and different types and proportions of missing data. It showed the relatively small improvement of the results when the hyper-parameter µ is chosen by cross-validation rather than randomly. On the contrary, using an overcomplete rather than an undercomplete network seems appropriate. A specific study is still required to confirm this result, which would make it possible to recommend the use of a random µ and an overcomplete structure.

Then, a numerical study compared the proposed mDAE method with eight other standard or recent missing value imputation methods. The results showed the good behavior of SoftImpute, mDAE and missForest. A new criterion called Mean Distance to the Best (MDB) was used to compare the methods globally over all the considered datasets and to rank them. The four most recent methods based on deep learning and optimal transport were systematically found in the last four positions for all types and proportions of artificial missing values. One might think these methods give better results with image or natural language processing data. This should be tested more thoroughly. The Python code for this numerical comparison will be made available on GitHub so that it can be reproduced with other datasets or completed with other methods.

Finally, the specific features of the mDAE method should make it possible to consider block-wise missing values by imposing a block-wise structuring of the masking noise. This type of missing data is frequent, for instance, with electronic health records, longitudinal studies or time series data, where failures in sensors and communication can result in a loss of multiple consecutive data points.

## References

Brett Beaulieu-Jones and Jason Moore. Missing data imputation in the electronic health record using deeply learned autoencoders. Volume 22, pp. 207–218, 2017. doi: 10.1142/9789813207813_0021.

Yoshua Bengio et al. Learning deep architectures for AI. *Foundations and Trends in Machine Learning*, 2(1):1–127, 2009.

Dheeru Dua and Casey Graff. UCI Machine Learning Repository, 2017. URL http://archive.ics.uci.edu/ml.

Yanjie Duan, Yisheng Lv, Wenwen Kang, and Yifei Zhao. A deep learning based approach for traffic data imputation. In *17th International IEEE Conference on Intelligent Transportation Systems (ITSC)*, pp. 912–917. IEEE, 2014.

Lovedeep Gondara and Ke Wang. MIDA: Multiple imputation using denoising autoencoders. In *Advances in Knowledge Discovery and Data Mining: 22nd Pacific-Asia Conference, PAKDD 2018, Melbourne, VIC, Australia, June 3-6, 2018, Proceedings, Part III*, pp. 260–272. Springer, 2018.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. *Advances in Neural Information Processing Systems*, 27, 2014.

Oleg Ivanov, Michael Figurnov, and Dmitry Vetrov. Variational autoencoder with arbitrary conditioning. *arXiv preprint arXiv:1806.02382*, 2018.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. *arXiv preprint arXiv:1312.6114*, 2013.

Roderick J. A. Little and Donald B. Rubin. *Statistical Analysis with Missing Data*, volume 793. John Wiley & Sons, 2019.

Pierre-Alexandre Mattei and Jes Frellsen. MIWAE: Deep generative modelling and imputation of incomplete data sets. In *International Conference on Machine Learning*, pp. 4413–4423. PMLR, 2019.

Imke Mayer, Aude Sportisse, Julie Josse, Nicholas Tierney, and Nathalie Vialaneix. R-miss-tastic: a unified platform for missing values methods and workflows, 2021.

Rahul Mazumder, Trevor Hastie, and Robert Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. *The Journal of Machine Learning Research*, 11:2287–2322, 2010.

Boris Muzellec. MissingDataOT. URL https://github.com/BorisMuzellec/MissingDataOT.

Boris Muzellec, Julie Josse, Claire Boyer, and Marco Cuturi. Missing data imputation using optimal transport. In *International Conference on Machine Learning*, pp. 7130–7140. PMLR, 2020.

Ignacio Peis, Chao Ma, and José Miguel Hernández-Lobato. Missing data imputation and acquisition with deep hierarchical models and Hamiltonian Monte Carlo. *Advances in Neural Information Processing Systems*, 35:35839–35851, 2022.

Ricardo Cardoso Pereira, Miriam Seoane Santos, Pedro Pereira Rodrigues, and Pedro Henriques Abreu. Reviewing autoencoders for missing data imputation: Technical trends, applications and outcomes. *Journal of Artificial Intelligence Research*, 69:1255–1285, 2020.

Donald B. Rubin. Inference and missing data. *Biometrika*, 63(3):581–592, 1976.

Seunghyoung Ryu, Minsoo Kim, and Hongseok Kim. Denoising autoencoder-based missing value imputation for smart meters. *IEEE Access*, 8:40656–40666, 2020.

Daniel J. Stekhoven and Peter Bühlmann. MissForest—non-parametric missing value imputation for mixed-type data. *Bioinformatics*, 28(1):112–118, 2012.

Olga Troyanskaya, Michael Cantor, Gavin Sherlock, Pat Brown, Trevor Hastie, Robert Tibshirani, David Botstein, and Russ B. Altman. Missing value estimation methods for DNA microarrays. *Bioinformatics*, 17(6):520–525, 2001.

Stef Van Buuren. *Flexible Imputation of Missing Data*. CRC Press, 2018.

Stef Van Buuren and Karin Groothuis-Oudshoorn. mice: Multivariate imputation by chained equations in R. *Journal of Statistical Software*, 45:1–67, 2011.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In *Proceedings of the 25th International Conference on Machine Learning*, pp. 1096–1103, 2008.

Jinsung Yoon, James Jordon, and Mihaela Schaar. GAIN: Missing data imputation using generative adversarial nets. In *International Conference on Machine Learning*, pp. 5689–5698. PMLR, 2018.

He Zhao, Ke Sun, Amir Dezfouli, and Edwin V. Bonilla. Transformed distribution matching for missing value imputation. In *International Conference on Machine Learning*, pp. 42159–42186. PMLR, 2023.

## A Appendix

| Method                 | breast        | climate       | sonar         | iono          | seeds         | wine          | blood         |
|------------------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|
| mDAE                   | 0.535 ± 0.012 | 1.005 ± 0.009 | 0.735 ± 0.011 | 0.793 ± 0.012 | 0.537 ± 0.026 | 0.862 ± 0.029 | 0.761 ± 0.053 |
| mDAE w/o modified loss | 0.829 ± 0.021 | 1.006 ± 0.008 | 1.007 ± 0.007 | 0.880 ± 0.013 | 0.741 ± 0.029 | 0.927 ± 0.027 | 0.844 ± 0.052 |
|                        | (54.953%)     | (0.100%)      | (37.007%)     | (10.971%)     | (37.989%)     | (7.541%)      | (10.907%)     |
| mDAE w/o optimal µ     | 0.538 ± 0.028 | 1.023 ± 0.018 | 0.764 ± 0.041 | 0.832 ± 0.035 | 0.563 ± 0.052 | 0.885 ± 0.055 | 0.746 ± 0.054 |
|                        | (0.561%)      | (1.791%)      | (3.946%)      | (4.918%)      | (4.842%)      | (2.668%)      | (-1.971%)     |
| mDAE w/o overcomplete  | 0.548 ± 0.014 | 1.159 ± 0.014 | 0.774 ± 0.013 | 0.845 ± 0.016 | 0.756 ± 0.203 | 0.959 ± 0.028 | 0.849 ± 0.106 |
|                        | (2.430%)      | (15.323%)     | (5.306%)      | (6.557%)      | (40.782%)     | (11.253%)     | (11.564%)     |

Table 4: Mean RMSE of reconstruction (± the standard deviation) for B = 8 random draws of 40% of MCAR artificial missing values.

| Method                 | breast        | climate       | sonar         | iono          | seeds         | wine          | blood         |
|------------------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|
| mDAE                   | 0.484 ± 0.039 | 1.009 ± 0.012 | 0.657 ± 0.033 | 0.834 ± 0.029 | 0.468 ± 0.057 | 0.829 ± 0.042 | 0.613 ± 0.187 |
| mDAE w/o modified loss | 0.812 ± 0.066 | 1.005 ± 0.011 | 0.978 ± 0.026 | 0.898 ± 0.028 | 0.682 ± 0.088 | 0.892 ± 0.061 | 0.839 ± 0.299 |
|                        | (67.769%)     | (-0.396%)     | (48.858%)     | (7.674%)      | (45.726%)     | (7.600%)      | (36.868%)     |
| mDAE w/o optimal µ     | 0.482 ± 0.038 | 1.033 ± 0.015 | 0.686 ± 0.042 | 0.880 ± 0.055 | 0.485 ± 0.071 | 0.888 ± 0.090 | 0.637 ± 0.225 |
|                        | (-0.413%)     | (2.379%)      | (4.414%)      | (5.516%)      | (3.632%)      | (7.117%)      | (3.915%)      |
| mDAE w/o overcomplete  | 0.521 ± 0.035 | 1.161 ± 0.013 | 0.716 ± 0.030 | 0.899 ± 0.046 | 0.830 ± 0.294 | 0.974 ± 0.065 | 0.967 ± 0.345 |
|                        | (7.645%)      | (15.064%)     | (8.980%)      | (7.794%)      | (77.350%)     | (17.491%)     | (57.749%)     |

Table 5: Mean RMSE of reconstruction (± the standard deviation) for B = 8 random draws of 20% of MAR artificial missing values.

| Method                 | breast        | climate       | sonar         | iono          | seeds         | wine          | blood         |
|------------------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|
| mDAE                   | 0.510 ± 0.032 | 1.005 ± 0.006 | 0.718 ± 0.017 | 0.804 ± 0.018 | 0.511 ± 0.040 | 0.830 ± 0.021 | 0.658 ± 0.090 |
| mDAE w/o modified loss | 0.854 ± 0.045 | 1.005 ± 0.008 | 1.000 ± 0.024 | 0.891 ± 0.025 | 0.821 ± 0.052 | 0.916 ± 0.027 | 0.893 ± 0.155 |
|                        | (67.451%)     | (0.000%)      | (39.276%)     | (10.821%)     | (60.665%)     | (10.361%)     | (35.714%)     |
| mDAE w/o optimal µ     | 0.546 ± 0.059 | 1.033 ± 0.021 | 0.766 ± 0.033 | 0.846 ± 0.046 | 0.537 ± 0.052 | 0.925 ± 0.047 | 0.668 ± 0.099 |
|                        | (7.059%)      | (2.786%)      | (6.685%)      | (5.224%)      | (5.088%)      | (11.446%)     | (1.520%)      |
| mDAE w/o overcomplete  | 0.526 ± 0.023 | 1.149 ± 0.015 | 0.780 ± 0.024 | 0.868 ± 0.028 | 0.743 ± 0.230 | 1.018 ± 0.140 | 0.989 ± 0.232 |
|                        | (3.137%)      | (14.328%)     | (8.635%)      | (7.960%)      | (45.401%)     | (22.651%)     | (50.304%)     |

Table 6: Mean RMSE of reconstruction (± the standard deviation) for B = 8 random draws of 40% of MAR artificial missing values.

| Method                 | breast        | climate       | sonar         | iono          | seeds         | wine          | blood         |
|------------------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|
| mDAE                   | 0.486 ± 0.029 | 1.001 ± 0.006 | 0.684 ± 0.013 | 0.829 ± 0.027 | 0.503 ± 0.032 | 0.805 ± 0.033 | 0.738 ± 0.176 |
| mDAE w/o modified loss | 0.795 ± 0.048 | 1.000 ± 0.008 | 1.005 ± 0.019 | 0.890 ± 0.033 | 0.682 ± 0.084 | 0.864 ± 0.043 | 0.943 ± 0.248 |
|                        | (63.580%)     | (-0.100%)     | (46.930%)     | (7.358%)      | (35.586%)     | (7.329%)      | (27.778%)     |
| mDAE w/o optimal µ     | 0.518 ± 0.050 | 1.024 ± 0.011 | 0.698 ± 0.030 | 0.836 ± 0.021 | 0.531 ± 0.025 | 0.839 ± 0.076 | 0.772 ± 0.213 |
|                        | (6.584%)      | (2.298%)      | (2.047%)      | (0.844%)      | (5.567%)      | (4.224%)      | (4.607%)      |
| mDAE w/o overcomplete  | 0.521 ± 0.025 | 1.156 ± 0.016 | 0.742 ± 0.028 | 0.893 ± 0.055 | 0.686 ± 0.229 | 0.965 ± 0.091 | 0.950 ± 0.288 |
|                        | (7.202%)      | (15.485%)     | (8.480%)      | (7.720%)      | (36.382%)     | (19.876%)     | (28.726%)     |

Table 7: Mean RMSE of reconstruction (± the standard deviation) for B = 8 random draws of 20% of MNAR artificial missing values.

| Method                 | breast        | climate       | sonar         | iono          | seeds         | wine          | blood         |
|------------------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|
| mDAE                   | 0.543 ± 0.030 | 1.005 ± 0.006 | 0.738 ± 0.018 | 0.798 ± 0.018 | 0.524 ± 0.031 | 0.873 ± 0.042 | 0.737 ± 0.102 |
| mDAE w/o modified loss | 0.866 ± 0.043 | 1.007 ± 0.007 | 0.998 ± 0.016 | 0.893 ± 0.011 | 0.800 ± 0.039 | 0.950 ± 0.047 | 0.885 ± 0.109 |
|                        | (59.484%)     | (0.199%)      | (35.230%)     | (11.905%)     | (52.672%)     | (8.820%)      | (20.081%)     |
| mDAE w/o optimal µ     | 0.574 ± 0.048 | 1.016 ± 0.012 | 0.750 ± 0.039 | 0.830 ± 0.035 | 0.565 ± 0.051 | 0.922 ± 0.039 | 0.750 ± 0.115 |
|                        | (5.709%)      | (1.095%)      | (1.626%)      | (4.010%)      | (7.824%)      | (5.613%)      | (1.764%)      |
| mDAE w/o overcomplete  | 0.559 ± 0.024 | 1.158 ± 0.013 | 0.796 ± 0.025 | 0.859 ± 0.020 | 0.827 ± 0.236 | 1.000 ± 0.034 | 0.927 ± 0.082 |
|                        | (2.947%)      | (15.224%)     | (7.859%)      | (7.644%)      | (57.824%)     | (14.548%)     | (25.780%)     |

Table 8: Mean RMSE of reconstruction (± the standard deviation) for B = 8 random draws of 40% of MNAR artificial missing values.

## B Appendix

![15_image_0.png](15_image_0.png)

Figure 9: Mean Distance to the Best (MDB) obtained with 40% of MCAR artificial missing values.

![15_image_1.png](15_image_1.png)

Figure 10: Mean Distance to the Best (MDB) obtained with MAR artificial missing values.

![15_image_2.png](15_image_2.png)

Figure 11: Mean Distance to the Best (MDB) obtained with MNAR artificial missing values.

![15_image_3.png](15_image_3.png)

Figure 12: Mean Distance to the Best (MDB) obtained with 10% of MCAR artificial missing values.

![16_image_0.png](16_image_0.png)

Figure 13

Ofi4f77DoK/Ofi4f77DoK_meta.json
ADDED
@@ -0,0 +1,25 @@
{
    "languages": null,
    "filetype": "pdf",
    "toc": [],
    "pages": 17,
    "ocr_stats": {
        "ocr_pages": 2,
        "ocr_failed": 0,
        "ocr_success": 2,
        "ocr_engine": "surya"
    },
    "block_stats": {
        "header_footer": 17,
        "code": 0,
        "table": 8,
        "equations": {
            "successful_ocr": 19,
            "unsuccessful_ocr": 6,
            "equations": 25
        }
    },
    "postprocess_stats": {
        "edit": {}
    }
}