RedTachyon committed on

Commit
970284e
1 Parent(s): 71a31d2

Upload folder using huggingface_hub
ii6heihoen/10_image_0.png ADDED

Git LFS Details

  • SHA256: a12f70fa615b7aeb4109be2320be566df61748be337c6d3b89f992bd474f0032
  • Pointer size: 130 Bytes
  • Size of remote file: 15.2 kB
ii6heihoen/11_image_0.png ADDED

Git LFS Details

  • SHA256: e4cb058e1f51ee5a536f7bae46bc0fda144d74dedeb8b216196b9db1476d1216
  • Pointer size: 130 Bytes
  • Size of remote file: 24.6 kB
ii6heihoen/11_image_1.png ADDED

Git LFS Details

  • SHA256: 9181782c660fb42aa6c1502d0820b39580f2f8a01e971805895d3f0f85fbbf1e
  • Pointer size: 130 Bytes
  • Size of remote file: 16.7 kB
ii6heihoen/17_image_0.png ADDED

Git LFS Details

  • SHA256: dc0a2a3046521d08f93a5bc649e6933ce25b445463869870ae5cbe9ef6645abb
  • Pointer size: 130 Bytes
  • Size of remote file: 50 kB
ii6heihoen/4_image_0.png ADDED

Git LFS Details

  • SHA256: 109c45a9576e5bd03edfca2b734b9bc480d5b69d2ccc5992d57d47f5d0a1a24a
  • Pointer size: 130 Bytes
  • Size of remote file: 64.3 kB
ii6heihoen/8_image_0.png ADDED

Git LFS Details

  • SHA256: 51870e3ba24ad9f851f6bfd5b5bfcfa625c879af016bdc8f12819e5cb6ceed64
  • Pointer size: 131 Bytes
  • Size of remote file: 121 kB
ii6heihoen/9_image_0.png ADDED

Git LFS Details

  • SHA256: a3194b32560bb991b1623e8e4c82b03848faebd36f82807f761a2742b0c68b24
  • Pointer size: 130 Bytes
  • Size of remote file: 31.6 kB
ii6heihoen/ii6heihoen.md ADDED
@@ -0,0 +1,461 @@

# Unsupervised Federated Domain Adaptation for Segmentation of MRI Images

Anonymous authors
Paper under double-blind review

## Abstract

Automatic semantic segmentation of magnetic resonance imaging (MRI) images using deep neural networks greatly assists in evaluating and planning treatments for various clinical applications. However, training these models requires abundant annotated data for supervised learning. Even if we annotate enough data, MRI images display considerable variability due to factors such as differences in patients, MRI scanners, and imaging protocols. This variability necessitates retraining neural networks for each specific application domain, which, in turn, requires manual annotation by expert radiologists for all new domains. To relax the need for persistent data annotation, we develop a method for unsupervised federated domain adaptation using multiple annotated source domains. Our approach enables the transfer of knowledge from several annotated source domains to adapt a model for use in an unannotated target domain. Initially, we ensure that the target domain data shares similar representations with each source domain in a latent embedding space, modeled as the output of a deep encoder, by minimizing the pair-wise distances of the distributions for the target domain and the source domains. We then employ an ensemble approach to leverage the knowledge obtained from all domains.

We perform experiments on two datasets to demonstrate that our method is effective. Our code is available as a supplement: http://SupressedforDoubleBlindReview.

## 1 Introduction

Semantic segmentation of MRI images helps detect anatomical structures and regions of interest, which simplifies the interpretation of these images. High-quality segmented images are extremely useful in applications such as disease detection and monitoring Hatamizadeh et al. (2021); Karayegen & Aksahin (2021), surgical guidance Jolesz et al. (2001); Wei et al. (2022), treatment response assessment Kickingereder et al. (2019), and AI-aided diagnosis Arsalan et al. (2019). UNet-based convolutional neural networks (CNNs) have been shown to be effective for automatic semantic segmentation of MRI images Pravitasari et al. (2020); Maji et al. (2022), but their adoption in clinical settings has been quite limited. A major reason for this limitation is that training deep neural networks requires large annotated datasets Yang et al. (2021); Wang et al. (2021).

Annotating MRI data reliably requires the expertise of trained radiologists and physicians, which makes it a challenging process. Crowdsourced annotation platforms are inapplicable because medical data is typically distributed across different institutions and because average crowd workers lack the required specialized knowledge. Moreover, medical data is private, and sharing it for annotation is highly regulated.

Even if we prepare a suitable annotated dataset and successfully train a segmentation model, it may not generalize well in practice. The reason is that MRI images are known to vary significantly due to differences in patients, MRI scanners, and imaging protocols Kruggel et al. (2010); Ackaouy et al. (2020).

These variations introduce domain shift during testing Sankaranarayanan et al. (2018b), which leads to model performance degradation Xu et al. (2019). Persistently annotating data and then retraining the model from scratch may address this challenge, but it is an inefficient solution. Unsupervised Domain Adaptation (UDA) is a framework that has been developed to tackle the issue of domain shift without requiring data annotation. The goal in UDA is to enable the generalization of a model trained on a source domain with annotated data to a target domain with only unannotated data Biasetton et al. (2019); Zou et al. (2018).

A major approach to address UDA is to map data points from a source and a target domain into a shared latent embedding space in which the two distributions are aligned. Since domain shift would not exist in such a latent feature space, a segmentation model that is trained solely on the source domain data, and receives latent features at its input, would generalize to the target domain data. The major approach to implement this idea is to model the data mapping function using a deep neural encoder network, whose output space models the shared latent space. The encoder is trained such that it aligns the source and the target distributions at its output. This process can be achieved using adversarial learning (Javanmardi & Tasdizen (2018); Cui et al. (2021); Sun et al. (2022)) or direct probability matching (Bhushan Damodaran et al. (2018); Ackaouy et al. (2020); Al Chanti & Mateus (2021)). In the former approach, the distributions are matched indirectly through competing generator and discriminator networks to learn a domain-agnostic embedding at the output of the generator. In the latter approach, a probability metric is selected and minimized to align the distributions directly in the latent embedding space.

Most UDA methods utilize a single source domain for knowledge transfer. However, we may have access to several source domains. Specifically, medical data is usually distributed across different institutions, and often we can find several source domains. For this reason, classic UDA has been extended to multi-source UDA (MSUDA), where the goal is to benefit from multiple distinct sources of knowledge Zhao et al. (2019); Tasar et al. (2020); Gong et al. (2021); He et al. (2021). Leveraging collective information from multiple annotated source domains can enhance model generalization compared to single-source UDA. Unlike single-source UDA, MSUDA algorithms need to consider the differences in data distribution between pairs of source domains in addition to the disparities between a single source domain and the target domain. A naive approach to address MSUDA is to assume that the annotated source datasets can be transferred to a central server and then processed similarly to single-source UDA. However, this assumption overlooks common constraints in medical domain problems such as privacy and security regulations. These regulations often prevent sharing data across the source domains. To overcome these challenges, our contribution is an alternative two-step MSUDA algorithm: (1) we first train a model based on single-source UDA between each source domain and the target domain, relying on direct probability metric minimization for this purpose; (2) at test time on the target domain, we use these models individually to segment an image and then aggregate the resulting segmented images, in a pixel-wise manner, according to the confidence we have in each model. As a result, we maintain the privacy constraints between the source domains and improve upon single-source UDA algorithms. Moreover, we implemented ensemble multi-source UDA techniques on the MICCAI 2016 MS lesion segmentation and 2019 CHAOS MR datasets to show that our two-step MSUDA algorithm is robust across these medical image segmentation datasets and leads to state-of-the-art performance.

## 2 Related Work

**Semantic Segmentation of MRI Data.** Semantic segmentation of MRI images helps to increase the clarity and interpretability of these images Işın et al. (2016). While this task is often performed manually by radiologists in clinical settings, manual annotation is prone to inter-reader variation, expensive, and time-consuming. To address these limitations, classical machine learning algorithms have been used to automate segmenting MRI scans Levinski et al. (2009); Liu & Guo (2015); Carreira et al. (2012); Sourin et al. (2010). However, these algorithms rely on hand-crafted features which require expertise in engineering and medicine, and careful creation of imaging features for a specific problem of interest. Additionally, anatomical variations, variations in MRI acquisition settings and scanners, imperfections in image acquisition, and variations in pathology appearance are obstacles for their generalization in clinical settings. Deep learning models have the capacity to relax the need for feature engineering. Specifically, architectures based on convolutional neural networks (CNNs) have been found quite effective in medical semantic segmentation Long et al. (2015a); Ronneberger et al. (2015a); Du et al. (2020). Fully Convolutional Networks (FCNs) Du et al. (2020) extend the vanilla CNN architecture to an end-to-end model for pixel-wise prediction, which is more suitable for semantic segmentation. FCNs have an encoder-decoder structure, where the core idea is to replace the fully connected layers of a CNN with up-sampling layers that map the features extracted by the convolutional layers back to the original input space dimension. This way, the model can be trained to predict the semantic masks directly at its output. As an extension of FCNs, U-Nets Ronneberger et al. (2015a) are the dominant architecture for medical semantic segmentation tasks. U-Nets are similar to FCNs, but skip connections between the encoder and decoder layers are used to preserve spatial information at all abstraction levels. For this reason, the number of down-sampling and up-sampling layers is equal in a U-Net, making it possible to add skip connections between pairs of layers that have the same hierarchy. The skip connections help propagate spatial information to the deeper layers of U-Nets, which yields accurate segmentation results through the use of features at different abstraction levels.

The downside of U-Nets is the necessity of having large annotated datasets to train them.

**Single-Source UDA** methods are developed to relax the need for persistent data annotation and improve model generalization using solely unannotated data. These methods utilize only one source domain with annotated data to adapt a model to generalize on the unannotated target domain. The notion of a domain is flexible; a new domain can even arise for the same problem if a condition changes during the model testing phase.

UDA methods have been used extensively in the two areas of image classification Goodfellow et al. (2014); Hoffman et al. (2018); Dhouib et al. (2020); Luc et al. (2016); Tzeng et al. (2017); Sankaranarayanan et al. (2018a); Long et al. (2015b; 2017); Morerio et al. (2018) and image segmentation Javanmardi & Tasdizen (2018); Cui et al. (2021); Sun et al. (2022); Bhushan Damodaran et al. (2018); Ackaouy et al. (2020); Al Chanti & Mateus (2021). The classic workflow in UDA is to train a deep neural network on both the annotated source domain and the unannotated target domain such that the end-to-end learning is supervised by the source domain data and domain alignment is realized in a network hidden layer, treated as a latent embedding space, using data from both domains. As a result, the network generalizes on the target domain. The alignment of the distributions for UDA is often achieved by utilizing generative adversarial networks Goodfellow et al. (2014); Hoffman et al. (2018); Dhouib et al. (2020); Javanmardi & Tasdizen (2018); Cui et al. (2021); Sun et al. (2022) or probability metric minimization Long et al. (2015b; 2017); Morerio et al. (2018); Bhushan Damodaran et al. (2018); Ackaouy et al. (2020); Al Chanti & Mateus (2021). Adversarial learning aligns the two distributions indirectly at the output of the generative subnetwork. For metric minimization, a suitable probability metric between the embeddings of the source and target domains is selected Long et al. (2015b; 2017); Morerio et al. (2018); Bhushan Damodaran et al. (2018); Ackaouy et al. (2020); Al Chanti & Mateus (2021); Rostami et al. (2020) and minimized at the output of a shared encoder for direct distribution alignment. The upside of this approach is that it requires less hyperparameter tuning. However, single-source UDA algorithms do not leverage inter-domain statistics when multiple source domains are present. Therefore, extending single-source UDA algorithms to a multi-source federated setting is a non-trivial task that requires careful consideration to mitigate the negative effect of distribution mismatches between the source domains.

**Multi-Source UDA** is an extension of single-source UDA that benefits from multiple source domains to enhance model generalization on a single target domain Xu et al. (2018); Zhao et al. (2019); Tasar et al. (2020); Gong et al. (2021). MSUDA is a more challenging problem due to variations across the source domains. Xu et al. (2018) extended adversarial learning to MSUDA by first reducing the difference between source and target domains using multi-way adversarial learning and then integrating the corresponding category classifiers. Zhao et al. (2019) extend this idea by introducing dynamic semantic consistency in addition to using pixel-level cycle-consistency towards the target domain. StandardGAN Tasar et al. (2020) relies on adversarial learning but standardizes the data of each source domain and the target domain so that all domains share similar distributions, reducing the adverse effect of inter-domain variations. Peng et al. (2019a) align inter-domain statistics of source domains in an embedding space to mitigate the effect of domain shift between the source domains. Guo et al. (2018) adopt a meta-learning approach to combine domain-specific predictions, while Venkat et al. (2020) use pseudo-labels to improve domain alignment. Note that having more source data in the MSUDA setting does not necessarily lead to improved performance compared to single-source UDA, because negative transfer, where adaptation from one domain hinders performance in another, can degrade performance. Li et al. (2018) leverage domain similarity to avoid negative transfer by utilizing model statistics in a shared embedding space. Zhu et al. (2019) align deep networks at different abstraction levels to achieve domain alignment. Wen et al. (2020) introduce a discriminator to exclude data samples with a negative impact on generalization. Zhao et al. (2020) align target features with source-trained features using optimal transport and combine source domains proportionally based on the optimal transport distance. mDALU Gong et al. (2021) addresses the effect of negative transfer using domain attention, uncertainty maximization, and attention-guided adversarial alignment.

**Federated Learning (FL)** is a distributed learning technique where the goal is to train distinct models in a decentralized manner when the data is distributed. Instead of the naive solution of transmitting raw data, the predictions of the models are fused to preserve data privacy. The core idea in FL is to learn a model using the local data, without sharing that data, and then use the individual model predictions to arrive at an improved collective performance. As a result, data privacy is preserved and we can benefit from distributed computational resources. While works on FL for supervised learning are extensive Bai et al. (2021); Niu & Deng (2022); Wicaksana et al. (2022), its application in UDA remains mostly unexplored. There are several FL studies on UDA, but these methods are primarily designed for classification tasks Peng et al. (2019b); Song et al. (2020). In our work, we develop a federated multi-source UDA algorithm.

## 3 Problem Formulation

Our focus in this work is to train a segmentation model for a target domain with the data distribution $\mathcal{T}$, where only unannotated images are accessible, i.e., we observe unannotated samples $\mathcal{D}^T = \{x^t_1, \ldots, x^t_{n_t}\}$ from the target domain distribution $\mathcal{T}$. Each data point $x \in \mathbb{R}^{W \times H \times C}$ in the input space is a $W \times H \times C$ MRI image, where $W$, $H$, and $C$ denote the width, height, and number of channels of the image. The goal is to segment an input image into semantic classes which are clinically meaningful, e.g., different organs in a frame. Since training a segmentation model with unannotated images is an ill-posed problem, we assume that we also have access to $N$ distinct domains with the data distributions $\mathcal{S}_1, \mathcal{S}_2, \ldots, \mathcal{S}_N$, where annotated segmented images are accessible in each domain, i.e., we have access to the annotated samples $\mathcal{D}^S_k = \{(x^s_{k,1}, y^s_{k,1}), \ldots, (x^s_{k,n^s_k}, y^s_{k,n^s_k})\}$, where $x^s_k \sim \mathcal{S}_k$ and $\forall i, j: \mathcal{S}_i \neq \mathcal{S}_j$, $\mathcal{S}_i \neq \mathcal{T}$. Each point $y$ in the output space is a semantic mask with the same size as the input MRI image, prepared by a medical professional. We consider a segmentation model $f_\theta(\cdot): \mathbb{R}^{W \times H \times C} \to \mathbb{R}^{|\mathcal{Y}|}$ with learnable parameters $\theta$, e.g., a 3D U-Net Ahmad et al. (2021), that should be trained to map the input image into a semantic mask, where $|\mathcal{Y}|$ is the number of shared semantic classes across the domains, determined by clinicians according to a specific problem. It is crucial to note that the semantic classes are the same across all the domains. To train a generalizable segmentation model with a single source domain, we can rely on the common approach of UDA, where we adapt a source-trained model to generalize better on the target domain. To this end, we first train the segmentation model on the single source domain. This is a straightforward task which can be performed using empirical risk minimization (ERM) on the corresponding annotated dataset:

$$\theta_{k}=\arg\min_{\theta}\mathcal{L}_{SL}(f_{\theta},\mathcal{D}_{k}^{S})=\arg\min_{\theta}\frac{1}{n_{k}^{s}}\sum_{i=1}^{n_{k}^{s}}\mathcal{L}_{ce}(f_{\theta}(x_{k,i}^{s}),y_{k,i}^{s}),\tag{1}$$

where $\mathcal{L}_{ce}$ denotes the cross-entropy loss. Because the target and source domains share the same semantic classes, the source-trained model can be directly used on the target domain. However, its performance will degrade because of the distributional differences between the source domains and the target domain, i.e., because $\mathcal{S}_k \neq \mathcal{T}$. The goal in single-source UDA is to leverage the target domain unannotated dataset and the source-trained model and adapt the model for enhanced generalization on the target domain.

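To make this pretraining stage concrete, the following is a minimal PyTorch sketch of the ERM objective in Eq. (1); the `model`, loader, and hyperparameter choices are illustrative assumptions rather than the exact experimental configuration.

```python
import torch
import torch.nn.functional as F

def pretrain_on_source(model, source_loader, epochs=10, lr=1e-4):
    """Minimize the supervised cross-entropy loss of Eq. (1) on one source domain."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in source_loader:  # x: MRI images, y: integer class masks
            opt.zero_grad()
            logits = model(x)                  # per-voxel class scores
            loss = F.cross_entropy(logits, y)  # pixel-wise cross-entropy, averaged
            loss.backward()
            opt.step()
    return model
```
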
The common strategy for this purpose is to map the data points from the source and the target domains into a shared embedding space in which distributional differences are minimized. To model this process, we consider that the base model $f_\theta$ can be decomposed into an encoder subnetwork $g_u(\cdot): \mathbb{R}^{W \times H \times C} \to \mathbb{R}^{d_Z}$ and a classifier subnetwork $h_v(\cdot): \mathbb{R}^{d_Z} \to \mathbb{R}^{|\mathcal{Y}|}$ with learnable parameters $u$ and $v$, where $f(\cdot) = (h \circ g)(\cdot)$ and $\theta = (u, v)$. In this formulation, the output space of the encoder subnetwork models a latent embedding space with dimension $d_Z$. In a single-source UDA setting, we select a distributional discrepancy metric $D(\cdot, \cdot)$ to define a cross-domain loss function and train the encoder by minimizing the selected metric. As a result, the distributions of both domains become similar in the latent space and hence the source-trained classifier subnetwork $h_v(\cdot)$ will generalize on the target domain $\mathcal{T}$. Many UDA methods have been developed using this approach, and we base our method for multi-source UDA on this solution for each of the source domains.

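The decomposition $f = (h \circ g)$ can be sketched as follows; the layer choices are placeholders (the experiments use a 3D U-Net), and only the encoder/classifier split matters here.

```python
import torch.nn as nn

class SegmentationModel(nn.Module):
    """f = h ∘ g: an encoder g_u producing latent features and a classifier h_v producing class scores."""
    def __init__(self, in_channels=1, latent_dim=64, num_classes=5):
        super().__init__()
        # g_u: maps images into the shared latent embedding space
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, latent_dim, 3, padding=1), nn.ReLU(),
            nn.Conv3d(latent_dim, latent_dim, 3, padding=1), nn.ReLU(),
        )
        # h_v: maps latent features to semantic class logits
        self.classifier = nn.Conv3d(latent_dim, num_classes, 1)

    def forward(self, x):
        z = self.encoder(x)        # latent representation used for alignment
        return self.classifier(z)  # per-voxel logits
```
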
To address a multi-source UDA setting, a naive solution is to gather data from all source domains centrally, create a single global source dataset, and then use single-source UDA. However, this approach is not practical in medical domains due to strict regulations and concerns about data privacy and security which prevent sharing data across different source domains. Even if data sharing were permitted, another major challenge arises from the significant differences in data distributions and characteristics among source domains. These differences can lead to negative knowledge transfer across the domains Wang et al. (2019). Negative knowledge transfer can occur because information from some source domains might be irrelevant or even harmful to the performance on the target domain. Additionally, the data from different source domains could interfere with each other, further complicating the learning process. These issues make it challenging to find a common representation that works effectively for all the source domains. To address these challenges, our approach of choice is ensemble learning. Ensemble learning combines multiple models or learners to improve overall collective performance. In the context of multi-source UDA, the idea is to develop an individual single-source UDA model for each source domain and then leverage the strengths of these models by combining their predictions to make a final prediction.

## 4 Proposed Multi-Source UDA Algorithm

![4_image_0.png](4_image_0.png)

Figure 1: Block-diagram of the proposed multi-source UDA approach: (a) we train source-specific models for each source domain based on ERM; (b) we perform single-source UDA to adapt each source-trained model via distributional alignment in the shared embedding space; (c) we aggregate the individual source-trained model predictions, according to their reliability, to make the final prediction on the target domain.

As illustrated in Figure 1, we follow a two-stage procedure to address multi-source UDA with MRI data. We first solve $N$ single-source UDA problems, one for each of the source domains. We then benefit from an ensemble of these distinct models. To align the target distribution with a source domain distribution, we use the Sliced Wasserstein Distance (SWD) because it is a suitable metric for deep learning optimization: SWD has the desirable property of non-vanishing gradients Rabin et al. (2011). Moreover, SWD can be computed in closed form from the empirical samples of two distributions:

$$W_{2}(g(\mathcal{T}),g(\mathcal{S}_{k}))=\frac{1}{L}\sum_{l=1}^{L}\sum_{i=1}^{M}\left|\left\langle g(x_{i_{l}}^{t}),\phi_{l}\right\rangle-\left\langle g(x_{k,j_{l}}^{s}),\phi_{l}\right\rangle\right|^{2}\tag{2}$$

where $\phi_l \in \mathbb{S}^{d_Z-1}$ is a 1D projection direction which is drawn uniformly at random from the unit ball $\mathbb{S}^{d_Z-1}$. Also, $i_l$ and $j_l$ denote the sorted indices of $\{\langle g(x_i), \phi_l \rangle\}_{i=1}^{M}$ for the target and the source domains, respectively.

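A sketch of this empirical SWD between two equally sized batches of latent features is given below; the helper name and the per-sample granularity are simplifying assumptions.

```python
import torch

def sliced_wasserstein(z_src, z_tgt, num_projections=50):
    """Empirical SWD between source/target embeddings of shape (M, d_Z), per Eq. (2)."""
    d = z_src.shape[1]
    # Random 1D projection directions phi_l, normalized onto the unit sphere S^{d_Z - 1}
    phi = torch.randn(d, num_projections, device=z_src.device)
    phi = phi / phi.norm(dim=0, keepdim=True)
    # Project both batches onto each direction, then sort to pair samples (indices i_l, j_l)
    proj_src, _ = torch.sort(z_src @ phi, dim=0)
    proj_tgt, _ = torch.sort(z_tgt @ phi, dim=0)
    return ((proj_src - proj_tgt) ** 2).sum(dim=0).mean()  # average over the L projections
```
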
We then solve the following optimization problem to adapt the model obtained from solving Eq. (1):

$$\min_{\theta}\;\mathcal{L}_{SL}(f_{\theta},\mathcal{D}_{k}^{S})+\gamma W_{2}(g_{u}(\mathcal{D}_{k}^{S}),g_{u}(\mathcal{D}^{T})),\tag{3}$$

where $\gamma$ is a regularization parameter. The first term enforces the embedding space to remain discriminative and the second term aligns the two distributions in the embedding space.

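One adaptation step for Eq. (3) could then look as follows, reusing the hypothetical `sliced_wasserstein` helper and encoder/classifier split sketched above; `gamma`, the loader pairing, and equal batch sizes are assumptions.

```python
import torch
import torch.nn.functional as F

def adapt_to_target(model, source_loader, target_loader, gamma=0.5, lr=1e-4, epochs=10):
    """Minimize Eq. (3): supervised source loss plus gamma-weighted SWD alignment."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for (xs, ys), xt in zip(source_loader, target_loader):  # assumes equal batch sizes
            opt.zero_grad()
            zs, zt = model.encoder(xs), model.encoder(xt)    # g_u outputs for both domains
            sup = F.cross_entropy(model.classifier(zs), ys)  # keeps the embedding discriminative
            align = sliced_wasserstein(zs.flatten(1), zt.flatten(1))  # aligns the distributions
            (sup + gamma * align).backward()
            opt.step()
    return model
```
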
After completing the adaptation process for each source domain, each model can generate a distinct mask on the target domain images. The key question in multi-source UDA is how to obtain a solution that is better than these single-source UDA solutions. We obtain the final model predictions for the target domain by combining the probabilistic predictions from all $N$ adapted models. We combine the model predictions in a pixel-wise manner as $\sum_{i=1}^{N} w_i f_{\theta_i}$ using mixing weights $w = (w_1, w_2, \ldots, w_N)$, where $0 \leq w_i \leq 1$ and $f_{\theta_i}$ represents the adapted model corresponding to the $i$-th source domain. This aggregation process allows for benefiting from the source models without sharing data across the source domains. Choosing the appropriate weight values is the key remaining challenge. We need to assign the weights such that models that do not generalize well cannot adversely impact the quality of the aggregated segmentation mask. To address this concern, we employ the *prediction confidence* of a source model on the target domain as a proxy for its generalization capability. To this end, we evaluate how confident each source model is when making predictions on the target domain and account for the measured confidence in the aggregation process.

Intuitively, we reduce the contribution of less certain predictions. The rationale behind using prediction confidence as a basis for weight assignment is supported by empirical evidence, which we present in Section 5. We set a confidence threshold, denoted as $\lambda$ and tuned empirically, and compute the weights as follows:

$$\tilde{w}_{k}=\sum_{i=1}^{n^{t}}\mathbb{1}\big(\max\tilde{f}_{\theta_{k}}(x_{i}^{t})>\lambda\big),\qquad w_{k}=\tilde{w}_{k}\Big/\sum\nolimits_{k'}\tilde{w}_{k'},\tag{4}$$

where $\tilde{f}(\cdot)$ denotes the model output just prior to the final SoftMax layer; this output serves as a measure of prediction certainty. If the prediction confidence of the $k$-th model exceeds $\lambda$, we assign $w_k$ a non-zero value to incorporate the predictions of that model into the final prediction process. However, if the prediction confidence falls below the threshold, we assign $w_k$ to be zero.

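A sketch of the weight computation in Eq. (4) is shown below; it counts confident per-pixel predictions, which is one plausible reading of the per-sample confidence in Eq. (4), and the shapes and helper name are illustrative.

```python
import torch

def confidence_weights(models, target_images, lam=0.3):
    """Eq. (4): count high-confidence target predictions per model, then normalize."""
    raw = []
    with torch.no_grad():
        for model in models:
            probs = torch.softmax(model(target_images), dim=1)  # per-voxel class probabilities
            conf = probs.max(dim=1).values                      # per-voxel max class probability
            raw.append((conf > lam).sum().float())              # count of confident predictions
    w = torch.stack(raw)
    return w / w.sum()  # normalized mixing weights w_k
```
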
Note that we maintain data privacy during the initial stages of pretraining and adaptation by ensuring that data samples are not shared between any two source domains. When we aggregate the predictions of the resulting models, we do not need the source data at all. As a result, our approach is applicable to medical domains where the source datasets are distributed across multiple entities. Our approach also allows for benefiting from new source domains as they become available, without requiring retraining the models from scratch. To this end, we only need to solve new single-source UDA problems. We then update the normalized mixing weights to benefit from the new domain and continually enhance the segmentation accuracy. The update process is efficient and incurs negligible runtime compared to the actual model training, offering an FL solution.

Our proposed approach is named "Federated Multi-Source UDA" (FMUDA) and is presented in Algorithm 1.

**Algorithm 1** Federated Multi-Source Unsupervised Domain Adaptation

1: **procedure** Single-SourceUDA($\mathcal{S}_i$, $\mathcal{T}$)
2: &nbsp;&nbsp;&nbsp;&nbsp;Pre-train $f_{\theta_i}$ by minimizing the supervised learning loss on $\mathcal{S}_i$
3: &nbsp;&nbsp;&nbsp;&nbsp;Update $f_{\theta_i}$ on the target domain $\mathcal{T}$ by solving Eq. (3)
4: &nbsp;&nbsp;&nbsp;&nbsp;**return** $f_{\theta_i}$
5: **procedure** Ensemble($x$, $f_{\theta_i}$, $\lambda$)
6: &nbsp;&nbsp;&nbsp;&nbsp;Compute the mixing weights $w_i$ using Eq. (4)
7: &nbsp;&nbsp;&nbsp;&nbsp;Compute the final segmentation mask $M$ as the weighted average of the single-source UDA masks $M_i$: $M \leftarrow \sum_{i=1}^{N} w_i M_i \big/ \sum_{i=1}^{N} w_i$
8: &nbsp;&nbsp;&nbsp;&nbsp;**return** $M$

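The Ensemble procedure of Algorithm 1 then reduces to a few lines; this sketch assumes the hypothetical `confidence_weights` helper above has already produced the normalized weights.

```python
import torch

def ensemble_predict(models, weights, x):
    """Algorithm 1, Ensemble: pixel-wise weighted average of single-source UDA predictions."""
    masks = [torch.softmax(m(x), dim=1) for m in models]  # probabilistic masks M_i
    combined = sum(w * m for w, m in zip(weights, masks)) / weights.sum()
    return combined.argmax(dim=1)  # final segmentation mask M
```
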
## 5 Experimental Validation

## 5.1 Experimental Setup

**Datasets:** We used the following two datasets in our experiments. *MICCAI 2016 MS lesion segmentation challenge dataset* Commowick et al. (2021): this dataset contains MRI images from patients suffering from Multiple Sclerosis, in which images contain hyperintense lesions on FLAIR. The dataset incorporates images from different clinical sites, each employing a different model of MRI scanner. Each site can be naturally modeled as a domain to form a multi-source UDA setting. In our experiments, we assume that each site has contributed images from five patients for training and ten patients for testing. The dataset is divided into training and testing image sets. Each patient's data includes high-quality segmentation maps derived from averaging seven independent manual segmentations by expert radiologists. These maps are an invaluable resource for our experimentation, offering the possibility of evaluation against the gold standard used in clinical settings. *2019 CHAOS MR Dataset* Kavur et al. (2021): this dataset consists of MR and CT scans with segmentation maps for abdominal organs such as the liver, right kidney, left kidney, and spleen. In total, 20 MR scans are obtained. We split the dataset randomly into three sites (01, 02, and 03) to report results in a multi-source UDA setting. The images were re-sampled to an axial view size of 256×256. The background was then cropped such that the distance between any labeled pixel and the image borders is at least 30 pixels, and scans were again resized to 256×256. Each 3D scan was normalized independently to zero mean and unit variance, and values more than three standard deviations from the mean were clipped. Data augmentation was performed on both the training MR and training CT instances using (1) random rotations of up to 20 degrees, (2) negating pixel values, (3) adding random Gaussian noise, and (4) random cropping.

**Preprocessing & Network Architecture:** To maintain the integrity of our experiments, we strictly used the test images solely for the testing phase, ensuring they were not used in any part of the training, validation, or adaptation processes. Following the literature on the MICCAI 2016 MS lesion segmentation challenge, we subjected the raw MRI images to several preliminary pre-processing procedures prior to using them as inputs for the segmentation network for enhanced performance. The procedures for each patient included (i) denoising of MRI images using the non-local means algorithm Coupé et al. (2008), (ii) rigid registration in relation to the FLAIR modality, performed to preserve the relative distance between every pair of points from the patient's anatomy to achieve correspondence, (iii) skull-stripping to remove the skull and non-brain tissues from the MRI images that are irrelevant to the task, and (iv) bias correction to reduce variance across the image. To accomplish these steps, we utilized Anima (https://anima.irisa.fr/), a publicly accessible toolkit for medical image processing developed by the Empenn research team at Inria Rennes. We employed a 3D-UNet architecture Isensee et al. (2018) as our segmentation model (please refer to the Appendices for the detailed architecture visualization), which is an improved version of the original UNet architecture Ronneberger et al. (2015b), to benefit from spatial dependencies in all directions. To ensure uniformity across the dataset, images were resampled to a consistent size of 128 × 128 × 128. From these images, 3D patches of size 16 × 16 × 16 were extracted with a patch overlap of 50%, resulting in a total of 4,096 patches per image. Although overlapping 3D patches contain more surrounding information for a voxel, which is memory demanding, training on patches containing lesions allowed us to reduce training time because the inputs become smaller, while simultaneously addressing the issue of class imbalance.

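The overlapping 3D patch extraction described above can be sketched with `torch.Tensor.unfold`; the exact padding and stride conventions of the actual pipeline are assumptions.

```python
import torch

def extract_patches(volume, patch=16, overlap=0.5):
    """Slide a 3D window over a (D, H, W) volume with the given fractional overlap."""
    stride = int(patch * (1 - overlap))  # 50% overlap -> stride of 8 voxels
    patches = (
        volume.unfold(0, patch, stride)
              .unfold(1, patch, stride)
              .unfold(2, patch, stride)
    )
    return patches.reshape(-1, patch, patch, patch)

patches = extract_patches(torch.randn(128, 128, 128))
print(patches.shape)  # exact patch count depends on padding/stride conventions
```
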
**Evaluation:** Following the literature, we used the DICE score to measure the similarity between the generated results and the provided ground truth masks. It is a full-reference measure defined as $\frac{2 \cdot |X \cap Y|}{|X| + |Y|}$, where $X$ and $Y$ are the segmentation masks of the predicted and ground truth images, respectively. The DICE score ranges from 0 to 1, where a score of 1 indicates perfect overlap and 0 signifies no overlap. This metric is particularly suitable for evaluating segmentation tasks, as it quantifies how well the segmented regions match the ground truth, accounting for both false positive and false negative scenarios. We repeated our experiments five times and report the average performance.

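For completeness, the DICE score above translates directly into code; this sketch assumes binary masks.

```python
import torch

def dice_score(pred, target, eps=1e-8):
    """DICE = 2|X ∩ Y| / (|X| + |Y|) for binary segmentation masks."""
    pred, target = pred.bool(), target.bool()
    intersection = (pred & target).sum().float()
    return (2 * intersection / (pred.sum() + target.sum() + eps)).item()
```
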
**Baselines for Comparison:** There are not many prior works in the literature on the problem we explored. To provide a comprehensive evaluation of the proposed method and measure its competitiveness, we set up a series of comparative baselines. These baselines were selected not only to represent standard and popular strategies in image adaptation and prediction but also to highlight the uniqueness and advantages of our approach. Additionally, some of these baselines serve as ablative experiments that demonstrate that all components of our algorithm are important for optimal performance. We use four baselines to compare with our method: (i) **Source-Trained Model (SUDA)**: the performance of the best model trained using single-source UDA for the target domain. This baseline serves as an ablative experiment because improvements over it demonstrate the effectiveness of using multi-source UDA. (ii) **Popular Voting (PV)**: assigning the label for each pixel based on the majority vote of the individual single-source adapted models. When the votes are tied, we assign the label randomly. Majority voting considers all the models equally. Improvements over this baseline demonstrate the effectiveness of our ensemble technique because it is the simplest aggregation strategy. (iii) **Averaging (AV)**: under this baseline, the prediction results from taking the average prediction of the single-source adapted models. This method can be particularly useful when the predictions are continuous or when there is the same amount of uncertainty in individual model predictions. This baseline can also serve as an ablative experiment because improvements over it demonstrate that treating all source domains equally and using uniform combination weights is not an optimal strategy. (iv) **SegJDOT** Ackaouy et al. (2020): to the best of our knowledge, this is the only prior comparable method in the literature that addresses multi-source UDA for semantic segmentation of MRI images. There are other multi-source UDA techniques, but those methods are developed for classification tasks and adapting them for semantic segmentation is not trivial. This baseline uses a different strategy to fuse information from several source domains, based on re-weighting the single-source UDA alignment loss for each domain and tuning the weights for optimal multi-source performance. A benefit that our approach offers compared to SegJDOT is that we do not need simultaneous access to all source domain data.

Table 1: Performance comparison (in terms of DICE metric) for multi-source UDA problems defined on the MICCAI 2016 MS lesion segmentation challenge dataset.

(a) Source 1 & Source 8:

| Method  | → 07  |
|---------|-------|
| SUDA    | 0.199 |
| PV      | 0.022 |
| AV      | 0.103 |
| SegJDOT | 0.315 |
| FMUDA   | 0.407 |

(b) Source 1 & Source 7:

| Method  | → 08  |
|---------|-------|
| SUDA    | 0.249 |
| PV      | 0.152 |
| AV      | 0.068 |
| SegJDOT | 0.418 |
| FMUDA   | 0.395 |

(c) Source 7 & Source 8:

| Method  | → 01  |
|---------|-------|
| SUDA    | 0.101 |
| PV      | 0.017 |
| AV      | 0.029 |
| SegJDOT | 0.385 |
| FMUDA   | 0.411 |

169
+ ## 5.2 Comparative And Ablative Experiments
170
+
171
+ Table 1: Performance comparison (in terms of DICE metric) for multi-source UDA problems defined on the MICCAI 2016 MS lesion segmentation challenge dataset.
172
+
173
+ Table 1 provides an overview of our comparative results. We have provided results for all the three possible multi-source UDA problems, wherein each instance involves designating two domains as source domains and the third domain in the dataset as the target domain. We report the downstream performance on the target domain for each UDA problem in Table 1. We have followed the original dataset to use "01", "07", and "08" to refer to the domains (sources) in the dataset. Upon careful examination, it is evident FMUDA stands out by delivering state-of-the-art (SOTA) performance across all the three multi-source UDA tasks. Particularly, improvements over SUDA is significant which demonstrate the advantage of our approach. A notable finding is also the substantial performance gap between FMUDA and PV or AV.
174
+
175
+ This discrepancy serves as compelling evidence for the effectiveness and indispensability of our ensemble approach in ensuring superior model performance. It emphasizes that the careful integration of information from multiple source domains, as facilitated by FMUDA, contributes significantly to overall multi-domain UDA successful strategy. The comparison between PV and AV against SUDA reveals that multi-source UDA is not inherently a superior method when aggregation is not executed properly. PV and AV exhibit underperformance in comparison to SUDA, emphasizing the importance of a well-crafted aggregation strategy in realizing the potential benefits of multi-source UDA to mitigate the effect of negative knowledge transfer. Underperformance compared SUDA suggests that interference between source domains is a major challenge that needs to be addressed in multi-source UDA. SegJDOT addresses this challenge and exhibits a better performance but not as good as FMUDA. We think this superiority stems from the fact that FMUDA uses distinct models for each source domain. In summary, our findings suggest that our FMUDA is not only a competitive method but also compares favorably against alternative methods.
176
+
177
+ Table 2 provides comparative results using the 2019 CHAOS Grad Challenge dataset, where we tested our method in a multi-source UDA setting. We observe that our approach outperforms the other baselines with a considerable margin. The results emphasized that in the multi-source situation, our ensemble method is able to improve the image segmentation performance using the knowledge trained in multiple models.
178
+
179
+ | Method | → 03 | | |
180
+ |----------|------------------------------------------------------|--------|-------|
181
+ | SUDA | 0.523 | | |
182
+ | PV | 0.159 | | |
183
+ | AV | 0.310 | | |
184
+ | FMUDA | 0.588 | Method | → 01 |
185
+ | | | SUDA | 0.543 |
186
+ | | | PV | 0.253 |
187
+ | | | AV | 0.377 |
188
+ | | | FMUDA | 0.602 |
189
+ | | Method → 02 SUDA 0.579 PV 0.192 AV 0.328 FMUDA 0.615 | | |
190
+
191
+ Table 2: Performance comparison (in terms of DICE metric) for multi-source UDA problems defined on the
192
+
193
+ ![8_image_0.png](8_image_0.png)
194
+
195
+ 2019 CHAOS Grad challenge dataset (CHAOS MR).
196
+
Figure 2: Segmentation masks generated for a sample MRI image when Source "01" is used as the target domain in UDA. In each figure, the colored area shows the mask generated by each UDA model.

To offer a more intuitive comparison and provide deeper insight into the comparative experiments, Figure 2 showcases segmentation results along with the original segmentation mask of the radiologists when Source "01" serves as the target domain and Sources "07" and "08" are used as the UDA source domains. Inspecting the second and third columns, we note that the performance of the single-source UDA models is quite different. While source "08" leads to decent performance, source "07" does not lead to good UDA performance. This observation is not surprising because UDA is effective when the source and the target domain share distributional similarities, and this example suggests that source "07" is not a good source domain for performing segmentation on source "01". This explains why the best single-source UDA model can outperform a naive aggregate. Additionally, this example demonstrates that, contrary to intuition, using more source domains does not necessarily lead to improved UDA performance, due to the possibility of negative knowledge transfer across the domains. In situations where the source domains are diverse, aggregation techniques such as averaging or majority voting are not very effective because we unintentionally give a high contribution to low-performance source domains when generating the aggregated mask. Hence, it is possible for the aggregated performance to be dominated by the worst single-source UDA performance. It is even possible to obtain performance lower than all single-source UDA models when the individual single-source UDA models lead to inconsistent predictions. Note that majority voting can also fail because the majority of the models can potentially be low-confidence models. In other words, multi-source UDA should be performed such that good source domains contribute the most when the aggregated mask is generated. In the absence of such a strategy, multi-source UDA can even lead to lower performance than single-source UDA. The strength of FMUDA is that, as can be seen in Figure 2, it aggregates the generated single-source UDA masks such that the aggregated mask becomes better than the mask generated by each of the single-source UDA models. For example, although the Source "08" model leads to relatively good performance, it fails to segment two regions in the upper half of the brain image. The multi-source UDA model can at least partially include these regions using the "07" model, which is confident in those regions.

![9_image_0.png](9_image_0.png)

Figure 3: Distribution matching in the embedding space: we use UMAP to visualize data representations when Source "07" in the dataset is used as the UDA source domain and Source "01" is used as the UDA target domain: (Left) source domain; (Center) target domain prior to single-source model adaptation; and (Right) target domain after single-source model adaptation.

To offer an intuitive insight into the way our approach works, Figure 3 illustrates the effect of domain alignment on the geometry of the data representations in the shared embedding space. In this figure, we have reduced the dimension of the data representations in the shared embedding space to two using the UMAP tool McInnes et al. (2018) for visualization purposes. We showcase the latent embeddings of data points for the source domain (Source "08" in the dataset) and the target domain (Source "01" in the dataset) both before and after adaptation to study the impact of single-source UDA on the geometry of the data representations. Each point in the figure corresponds to a pixel. Through careful visual inspection, we see that FMUDA effectively minimizes the distance between the empirical distributions of the target domain and the source domain after adaptation, leading to a domain-agnostic embedding space at the output of the encoder. Although the eventual mask is generated by aggregating several models, the alignment of single-source UDA distribution pairs translates into enhanced collective performance because each model becomes more confident after performing single-source UDA. This experiment highlights the efficacy of FMUDA in facilitating domain adaptation and improving the overall performance across diverse domains.

In addition to exploring the multi-source UDA setting, we conducted single-source UDA experiments and compared our results against SegJDOT, showcasing the competitiveness of our proposed approach in this scenario. The results of these experiments are summarized in Table 3, where we present performance results for six distinct pairwise single-source UDA problems defined on the dataset. To ensure a fair evaluation, we aligned the training/testing pairs with those used in SegJDOT. We observe from the results that our proposed approach consistently outperforms SegJDOT. Notably, when considering the average DICE score across these tasks, our approach exhibits a remarkable ≈ 20% improvement over SegJDOT. This heightened performance arises because SegJDOT relies on optimal transport for domain alignment, whereas our approach leverages SWD for distribution alignment. The inherent characteristics of SWD contribute to the improved adaptability and effectiveness of our method. This experiment demonstrates a second angle of our novelty in using SWD for solving UDA for semantic segmentation: our proposed approach is a competitive method for single-source UDA in problems involving semantic segmentation. These results indicate that our improved performance in the case of multi-source UDA also stems from performing single-source UDA better.

In Table 4, we present results for a single-source UDA scenario on the CHAOS dataset. Our decision to report results in this setting is motivated by the fact that previous studies have primarily reported their performance in a single-source UDA context. We compare our performance against single-source UDA methods, including SIFA Chen et al. (2019), CyCADA Hoffman et al. (2018), CycleGAN Zhu et al. (2017), SynSeg-Net Huo et al. (2018), and AdaOutput Tsai et al. (2018). Our results show that FMUDA achieves SOTA performance in this setting. We note that achieving SOTA performance in this context is also beneficial for multi-source UDA scenarios, because the overall SOTA performance in multi-source UDA depends on the individual models' performance.

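The UMAP visualization behind Figure 3 can be reproduced with a sketch along the following lines, assuming pixel-level latent features have already been collected into arrays; `umap-learn` is used with its documented API.

```python
import numpy as np
import umap  # pip install umap-learn
import matplotlib.pyplot as plt

def plot_embeddings(z_source, z_target, title):
    """Reduce latent features (N, d_Z) to 2D with UMAP and scatter-plot both domains."""
    reducer = umap.UMAP(n_components=2)
    z2d = reducer.fit_transform(np.concatenate([z_source, z_target]))
    n = len(z_source)
    plt.scatter(z2d[:n, 0], z2d[:n, 1], s=2, label="source")
    plt.scatter(z2d[n:, 0], z2d[n:, 1], s=2, label="target")
    plt.legend(); plt.title(title); plt.show()
```
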
Table 3: Performance comparison (in terms of DICE metric) for single-source UDA tasks defined on the MICCAI 2016 MS lesion segmentation challenge dataset.

(a) Source 1:

| Method    | → 07  | → 08  | Avg.  |
|-----------|-------|-------|-------|
| Pre-Adapt | 0.090 | 0.430 | 0.260 |
| SegJDOT   | 0.110 | 0.470 | 0.290 |
| FMUDA     | 0.452 | 0.418 | 0.435 |

(b) Source 7:

| Method    | → 01  | → 08  | Avg.  |
|-----------|-------|-------|-------|
| Pre-Adapt | 0.430 | 0.390 | 0.410 |
| SegJDOT   | 0.450 | 0.440 | 0.445 |
| FMUDA     | 0.484 | 0.442 | 0.463 |

(c) Source 8:

| Method    | → 01  | → 07  | Avg.  |
|-----------|-------|-------|-------|
| Pre-Adapt | 0.350 | 0.070 | 0.210 |
| SegJDOT   | 0.450 | 0.290 | 0.370 |
| FMUDA     | 0.483 | 0.458 | 0.471 |

Table 4: Performance comparison (in terms of DICE metric) for single-source UDA problems defined on the 2019 CHAOS Grand Challenge dataset (CHAOS MR).

| Method      | Liver | R. Kidney | L. Kidney | Spleen | Average |
|-------------|-------|-----------|-----------|--------|---------|
| Source-Only | 73.1  | 47.3      | 57.3      | 55.1   | 58.2    |
| Supervised  | 94.2  | 87.2      | 88.9      | 89.1   | 89.8    |
| SynSeg-Net  | 85.0  | 82.1      | 72.7      | 81.0   | 80.2    |
| AdaOutput   | 85.4  | 79.7      | 79.7      | 81.7   | 81.6    |
| CycleGAN    | 83.4  | 79.3      | 79.4      | 77.3   | 79.9    |
| CyCADA      | 84.5  | 78.6      | 80.3      | 76.9   | 80.1    |
| SIFA        | 88.0  | 83.3      | 80.9      | 82.6   | 83.7    |
| FMUDA       | 89.1  | 72.5      | 81.4      | 80.4   | 80.9    |

## 5.3 Analytic Experiments

In Figure 4, we first study the dynamics of our adaptation strategy on the model performance when Source "01" of the MICCAI 2016 MS lesion segmentation dataset is used as the source domain. In this figure, we visualize the training loss and the target domain performance versus training epochs. We observe a consistent pattern: pre-training on the source domain enhances performance in the target domain due to cross-domain similarities. Furthermore, a notable uptick in target domain accuracy becomes evident once the adaptation process begins.

![10_image_0.png](10_image_0.png)

Figure 4: Effect of the pretraining and adaptation process on the target domain performance (yellow curve) and the training loss (blue curve) for Source "01" of the MICCAI 2016 dataset.

Finally, we study the sensitivity of our performance with respect to the major hyperparameters, using the MICCAI 2016 MS lesion segmentation dataset. We first study the effect of the confidence parameter $\lambda$ on the downstream performance. This parameter acts as a threshold that filters out noisy, low-confidence predictions on images from the multiple sites. To this end, we measured the model performance versus the value of $\lambda$ on each target domain. Figure 5 presents the results of this study. We observe that selecting this parameter properly is important. Based on these observations, we conclude that $\lambda = 0.3$ is a suitable initial value for this parameter in our experiments. We can also use the validation set to tune this parameter for optimal performance.

![11_image_0.png](11_image_0.png)

Figure 5: Model performance versus the value of the hyperparameter $\lambda$.

We also investigate the influence of the SWD projection hyperparameter, denoted as $L$ in the definition of SWD in Equation 2. While a larger value of $L$ results in a more precise approximation of the SWD metric, it also comes with the drawback of an increased computational load. Our objective is to determine whether there exists a range of $L$ values that provides satisfactory adaptation performance and to scrutinize the impact of this parameter. To this end, we use two UDA tasks, as illustrated in Figure 6. We present our findings for a range of values $L \in \{1, 25, 50, 100, 150, 200, 250\}$. As anticipated, tightening the SWD approximation by increasing the number of projections results in improved performance. However, we observe that beyond a certain threshold, approximately $L \approx 50$, the performance gains become marginal and the algorithm becomes almost insensitive to $L$. Consequently, $L = 50$ is a good choice for this hyperparameter, balancing computational efficiency and adaptation performance.

![11_image_1.png](11_image_1.png)

Figure 6: Performance on the target domain versus the number of projections used in computing SWD.

## 6 Conclusion

We developed a multi-source UDA method for the segmentation of medical images when the source domain images are distributed. Our algorithm is a two-stage algorithm. In the first stage, we use the SWD metric to match the distributions of the source and the target domain in a shared embedding space, modeled as the output of a shared encoder. As a result, we obtain one adapted model per source-target domain pair. In the second stage, the segmentation masks generated by these models are aggregated based on the reliability of each model to build a final segmentation map that is more accurate than each of the individually generated single-source UDA masks. The validity of our algorithm is supported by experimental results on two real-world medical image datasets. Our experiments showcase the competitive performance of our algorithm when compared to SOTA alternatives. Our algorithm also maintains data privacy across the source domains because the source domains do not share data. Future endeavors involve exploring scenarios where the data for source domains is fully private and cannot be shared with the target domain. Another limitation of our work is the heuristic nature of the aggregation, which, despite being practical, may require further theoretical analysis. We also lack a theoretical approach to tune the hyperparameters, which would be beneficial when validation data is lacking and empirical hyperparameter tuning is not feasible.

## References

261
+
262
+ Antoine Ackaouy, Nicolas Courty, Emmanuel Vallée, Olivier Commowick, Christian Barillot, and Francesca Galassi. Unsupervised domain adaptation with optimal transport in multi-site segmentation of multiple sclerosis lesions from mri data. *Frontiers in computational neuroscience*, 14:19, 2020.
263
+
264
+ Parvez Ahmad, Saqib Qamar, Linlin Shen, and Adnan Saeed. Context aware 3d unet for brain tumor segmentation. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Revised Selected Papers, Part I 6, pp. 207–218. Springer, 2021.
265
+
266
+ Dawood Al Chanti and Diana Mateus. Olva: O ptimal l atent v ector a lignment for unsupervised domain adaptation in medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 261–271. Springer, 2021.
267
+
268
+ Muhammad Arsalan, Muhammad Owais, Tahir Mahmood, Se Woon Cho, and Kang Ryoung Park. Aiding the diagnosis of diabetic and hypertensive retinopathy using artificial intelligence-based semantic segmentation. *Journal of clinical medicine*, 8(9):1446, 2019.
269
+
270
+ Fan Bai, Jiaxiang Wu, Pengcheng Shen, Shaoxin Li, and Shuigeng Zhou. Federated face recognition. arXiv preprint arXiv:2105.02501, 2021.
271
+
272
+ Bharath Bhushan Damodaran, Benjamin Kellenberger, Rémi Flamary, Devis Tuia, and Nicolas Courty.
273
+
274
+ Deepjdot: Deep joint distribution optimal transport for unsupervised domain adaptation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 447–463, 2018.
275
+
276
+ Matteo Biasetton, Umberto Michieli, Gianluca Agresti, and Pietro Zanuttigh. Unsupervised domain adaptation for semantic segmentation of urban scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 0–0, 2019.
277
+
278
+ Joao Carreira, Rui Caseiro, Jorge Batista, and Cristian Sminchisescu. Semantic segmentation with secondorder pooling. In *Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence,*
279
+ Italy, October 7-13, 2012, Proceedings, Part VII 12, pp. 430–443. Springer, 2012.
280
+
281
+ Cheng Chen, Qi Dou, Hao Chen, Jing Qin, and Pheng-Ann Heng. Synergistic image and feature adaptation:
282
+ Towards cross-modality domain adaptation for medical image segmentation. In *Proceedings of the AAAI* conference on artificial intelligence, volume 33, pp. 865–872, 2019.
283
+
284
+ Olivier Commowick, Michaël Kain, Romain Casey, Roxana Ameli, Jean-Christophe Ferré, Anne Kerbrat, Thomas Tourdias, Frédéric Cervenansky, Sorina Camarasu-Pop, Tristan Glatard, et al. Multiple sclerosis lesions segmentation from multiple experts: The miccai 2016 challenge dataset. *Neuroimage*, 244:118589, 2021.
285
+
286
+ Pierrick Coupé, Pierre Yger, Sylvain Prima, Pierre Hellier, Charles Kervrann, and Christian Barillot. An optimized blockwise nonlocal means denoising filter for 3-d magnetic resonance images. *IEEE transactions* on medical imaging, 27(4):425–441, 2008.
287
+
288
+ Hengfei Cui, Chang Yuwen, Lei Jiang, Yong Xia, and Yanning Zhang. Bidirectional cross-modality unsupervised domain adaptation using generative adversarial networks for cardiac image segmentation. Computers in Biology and Medicine, 136:104726, 2021.
289
+
290
+ Sofien Dhouib, Ievgen Redko, and Carole Lartizien. Margin-aware adversarial domain adaptation with optimal transport. In *Thirty-seventh International Conference on Machine Learning*, 2020.
291
+
292
+ Getao Du, Xu Cao, Jimin Liang, Xueli Chen, and Yonghua Zhan. Medical image segmentation based on u-net: A review. *Journal of Imaging Science & Technology*, 64(2), 2020.
293
+
294
+ Rui Gong, Dengxin Dai, Yuhua Chen, Wen Li, and Luc Van Gool. mdalu: Multi-source domain adaptation and label unification with partial datasets. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8876–8885, 2021.
295
+
296
+ Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *Advances in Neural Information Processing Systems*, pp. 2672–2680, 2014.
297
+
298
+ Jiang Guo, Darsh Shah, and Regina Barzilay. Multi-source domain adaptation with mixture of experts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4694–4703, 2018.
299
+
300
+ Ali Hatamizadeh, Vishwesh Nath, Yucheng Tang, Dong Yang, Holger R Roth, and Daguang Xu. Swin unetr:
301
+ Swin transformers for semantic segmentation of brain tumors in mri images. In International MICCAI Brainlesion Workshop, pp. 272–284. Springer, 2021.
302
+
303
+ Jianzhong He, Xu Jia, Shuaijun Chen, and Jianzhuang Liu. Multi-source domain adaptation with collaborative learning for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11008–11017, 2021.
304
+
305
+ Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Darrell. Cycada: Cycle-consistent adversarial domain adaptation. In International conference on machine learning, pp. 1989–1998. PMLR, 2018.
306
+
307
+ Yuankai Huo, Zhoubing Xu, Hyeonsoo Moon, Shunxing Bao, Albert Assad, Tamara K Moyo, Michael R
308
+ Savona, Richard G Abramson, and Bennett A Landman. Synseg-net: Synthetic segmentation without target modality ground truth. *IEEE transactions on medical imaging*, 38(4):1016–1025, 2018.
309
+
310
+ Fabian Isensee, Philipp Kickingereder, Wolfgang Wick, Martin Bendszus, and Klaus H Maier-Hein. Brain tumor segmentation and radiomics survival prediction: Contribution to the brats 2017 challenge. In *Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: Third International Workshop, BrainLes 2017, Held in Conjunction with MICCAI 2017, Quebec City, QC, Canada, September 14, 2017, Revised Selected Papers 3*, pp. 287–297. Springer, 2018.
312
+
313
+ Ali Işın, Cem Direkoğlu, and Melike Şah. Review of mri-based brain tumor image segmentation using deep learning methods. *Procedia Computer Science*, 102:317–324, 2016.
314
+
315
+ Mehran Javanmardi and Tolga Tasdizen. Domain adaptation for biomedical image segmentation using adversarial training. In *2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)*,
316
+ pp. 554–558. IEEE, 2018.
317
+
318
+ Ferenc A Jolesz, Arya Nabavi, and Ron Kikinis. Integration of interventional mri with computer-assisted surgery. *Journal of Magnetic Resonance Imaging*, 13(1):69–77, 2001.
319
+
320
+ Gökay Karayegen and Mehmet Feyzi Aksahin. Brain tumor prediction on mr images with semantic segmentation by using deep learning network and 3d imaging of tumor region. *Biomedical Signal Processing and* Control, 66:102458, 2021.
321
+
322
+ A. Emre Kavur, N. Sinem Gezer, Mustafa Barış, Sinem Aslan, Pierre-Henri Conze, Vladimir Groza, Duc Duy Pham, Soumick Chatterjee, Philipp Ernst, Savaş Özkan, Bora Baydar, Dmitry Lachinov, Shuo Han, Josef Pauli, Fabian Isensee, Matthias Perkonigg, Rachana Sathish, Ronnie Rajan, Debdoot Sheet, Gurbandurdy Dovletov, Oliver Speck, Andreas Nürnberger, Klaus H. Maier-Hein, Gözde Bozdağı Akar, Gözde Ünal, Oğuz Dicle, and M. Alper Selver. CHAOS Challenge - combined (CT-MR) healthy abdominal organ segmentation. *Medical Image Analysis*, 69:101950, April 2021. ISSN
323
+ 1361-8415. doi: https://doi.org/10.1016/j.media.2020.101950. URL http://www.sciencedirect.com/science/article/pii/S1361841520303145.
324
+
325
+ Philipp Kickingereder, Fabian Isensee, Irada Tursunova, Jens Petersen, Ulf Neuberger, David Bonekamp, Gianluca Brugnara, Marianne Schell, Tobias Kessler, Martha Foltyn, et al. Automated quantitative tumour response assessment of mri in neuro-oncology with artificial neural networks: a multicentre, retrospective study. *The Lancet Oncology*, 20(5):728–740, 2019.
326
+
327
+ Frithjof Kruggel, Jessica Turner, L Tugan Muftuler, Alzheimer's Disease Neuroimaging Initiative, et al. Impact of scanner hardware and imaging protocol on image quality and compartment volume precision in the adni cohort. *Neuroimage*, 49(3):2123–2133, 2010.
330
+
331
+ Konstantin Levinski, Alexei Sourin, and Vitali Zagorodnov. Interactive surface-guided segmentation of brain mri data. *Computers in Biology and Medicine*, 39(12):1153–1160, 2009.
332
+
333
+ Yitong Li, Michael Murias, Geraldine Dawson, and David E Carlson. Extracting relationships by multi-domain matching. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), *Advances in Neural Information Processing Systems*, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/2fd0fd3efa7c4cfb034317b21f3c2d93-Paper.pdf.
335
+
336
+ Jianwei Liu and Lei Guo. A new brain mri image segmentation strategy based on wavelet transform and k-means clustering. In 2015 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), pp. 1–4. IEEE, 2015.
337
+
338
+ Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 3431–3440, 2015a.
339
+
340
+ Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In *Proceedings of International Conference on Machine Learning*, pp. 97–105, 2015b.
341
+
342
+ Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I. Jordan. Deep transfer learning with joint adaptation networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of *Proceedings of Machine Learning Research*, pp. 2208–2217. PMLR,
343
+ 06–11 Aug 2017. URL http://proceedings.mlr.press/v70/long17a.html.
344
+
345
+ Pauline Luc, Camille Couprie, Soumith Chintala, and Jakob Verbeek. Semantic segmentation using adversarial networks. In *NIPS Workshop on Adversarial Training*, 2016.
346
+
347
+ Dhiraj Maji, Prarthana Sigedar, and Munendra Singh. Attention res-unet with guided decoder for semantic segmentation of brain tumors. *Biomedical Signal Processing and Control*, 71:103077, 2022.
348
+
349
+ Leland McInnes, John Healy, Nathaniel Saul, and Lukas Großberger. UMAP: Uniform manifold approximation and projection. *Journal of Open Source Software*, 3(29):861, 2018.
350
+
351
+ Pietro Morerio, Jacopo Cavazza, and Vittorio Murino. Minimal-entropy correlation alignment for unsupervised deep domain adaptation. In *ICLR*, 2018.
352
+
353
+ Yifan Niu and Weihong Deng. Federated learning for face recognition with gradient correction. In *Proceedings* of the AAAI Conference on Artificial Intelligence, volume 36, pp. 1999–2007, 2022.
354
+
355
+ Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1406–1415, 2019a.
356
+
357
+ Xingchao Peng, Zijun Huang, Yizhe Zhu, and Kate Saenko. Federated adversarial domain adaptation, 2019b.
358
+
359
+ Anindya Apriliyanti Pravitasari, Nur Iriawan, Mawanda Almuhayar, Taufik Azmi, Irhamah Irhamah, Kartika Fithriasari, Santi Wulan Purnami, and Widiana Ferriastuti. Unet-vgg16 with transfer learning for mri-based brain tumor segmentation. *TELKOMNIKA (Telecommunication Computing Electronics and Control)*, 18(3):1310–1318, 2020.
360
+
361
+ J. Rabin, G. Peyré, J. Delon, and M. Bernot. Wasserstein barycenter and its application to texture mixing. In *International Conference on Scale Space and Variational Methods in Computer Vision*, pp. 435–446. Springer, 2011.
364
+
365
+ Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pp. 234–241. Springer, 2015a.
366
+
367
+ Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation, 2015b.
368
+
369
+ Mohammad Rostami, Soheil Kolouri, Praveen K Pilly, and James McClelland. Generative continual concept learning. In *AAAI*, pp. 5545–5552, 2020.
370
+
371
+ S. Sankaranarayanan, Y. Balaji, C. D. Castillo, and R. Chellappa. Generate to adapt: Aligning domains using generative adversarial networks. In *CVPR*, 2018a.
372
+
373
+ Swami Sankaranarayanan, Yogesh Balaji, Arpit Jain, Ser Nam Lim, and Rama Chellappa. Learning from synthetic data: Addressing domain shift for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3752–3761, 2018b.
374
+
375
+ Lei Song, Chunguang Ma, Guoyin Zhang, and Yun Zhang. Privacy-preserving unsupervised domain adaptation in federated setting. *IEEE Access*, 8:143233–143240, 2020.
376
+
377
+ Alexei Sourin, Shamima Yasmin, and Vitali Zagorodnov. Segmentation of mri brain data using a haptic device. In Proceedings of the 10th IEEE International Conference on Information Technology and Applications in Biomedicine, pp. 1–4. IEEE, 2010.
378
+
379
+ Yongheng Sun, Duwei Dai, and Songhua Xu. Rethinking adversarial domain adaptation: Orthogonal decomposition for unsupervised domain adaptation in medical image segmentation. *Medical Image Analysis*,
380
+ 82:102623, 2022.
381
+
382
+ Onur Tasar, Yuliya Tarabalka, Alain Giros, Pierre Alliez, and Sébastien Clerc. Standardgan: Multi-source domain adaptation for semantic segmentation of very high resolution satellite images by data standardization. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops*, pp. 192–193, 2020.
386
+
387
+ Yi-Hsuan Tsai, Wei-Chih Hung, Samuel Schulter, Kihyuk Sohn, Ming-Hsuan Yang, and Manmohan Chandraker. Learning to adapt structured output space for semantic segmentation. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 7472–7481, 2018.
389
+
390
+ Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition*, pp. 7167–7176, 2017.
393
+
394
+ Naveen Venkat, Jogendra Nath Kundu, Durgesh Kumar Singh, Ambareesh Revanur, and R Venkatesh Babu. Your classifier can secretly suffice multi-source domain adaptation. In *NeurIPS*, 2020.
397
+
398
+ Shanshan Wang, Cheng Li, Rongpin Wang, Zaiyi Liu, Meiyun Wang, Hongna Tan, Yaping Wu, Xinfeng Liu, Hui Sun, Rui Yang, et al. Annotation-efficient deep learning for automatic medical image segmentation. Nature communications, 12(1):5915, 2021.
399
+
400
+ Zirui Wang, Zihang Dai, Barnabás Póczos, and Jaime Carbonell. Characterizing and avoiding negative transfer. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 11293–11302, 2019.
403
+
404
+ Yi-Chia Wei, Wen-Yi Huang, Chih-Yu Jian, Chih-Chin Heather Hsu, Chih-Chung Hsu, Ching-Po Lin, Chi-Tung Cheng, Yao-Liang Chen, Hung-Yu Wei, and Kuan-Fu Chen. Semantic segmentation guided detector for segmentation, classification, and lesion mapping of acute ischemic stroke in mri images. *NeuroImage: Clinical*, 35:103044, 2022.
405
+
406
+ Junfeng Wen, Russell Greiner, and Dale Schuurmans. Domain aggregation networks for multi-source domain adaptation. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of *Proceedings of Machine Learning Research*, pp. 10214–10224. PMLR,
407
+ 13–18 Jul 2020. URL http://proceedings.mlr.press/v119/wen20b.html.
408
+
409
+ Jeffry Wicaksana, Zengqiang Yan, Dong Zhang, Xijie Huang, Huimin Wu, Xin Yang, and Kwang-Ting Cheng. Fedmix: Mixed supervised federated learning for medical image segmentation. *IEEE Transactions* on Medical Imaging, 2022.
410
+
411
+ Ruijia Xu, Ziliang Chen, Wangmeng Zuo, Junjie Yan, and Liang Lin. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In *Proceedings of the IEEE Conference on Computer* Vision and Pattern Recognition, pp. 3964–3973, 2018.
412
+
413
+ Yonghao Xu, Bo Du, Lefei Zhang, Qian Zhang, Guoli Wang, and Liangpei Zhang. Self-ensembling attention networks: Addressing domain shift for semantic segmentation. In *Proceedings of the AAAI Conference on* Artificial Intelligence, volume 33, pp. 5581–5588, 2019.
414
+
415
+ Sijie Yang, Fei Zhu, Xinghong Ling, Quan Liu, and Peiyao Zhao. Intelligent health care: Applications of deep learning in computational medicine. *Frontiers in Genetics*, 12:607471, 2021.
416
+
417
+ Sicheng Zhao, Bo Li, Xiangyu Yue, Yang Gu, Pengfei Xu, Runbo Hu, Hua Chai, and Kurt Keutzer. Multisource domain adaptation for semantic segmentation. *Advances in neural information processing systems*,
418
+ 32, 2019.
419
+
420
+ Sicheng Zhao, Guangzhi Wang, Shanghang Zhang, Yang Gu, Yaxian Li, Zhichao Song, Pengfei Xu, Runbo Hu, Hua Chai, and Kurt Keutzer. Multi-source distilling domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 12975–12983, 2020.
421
+
422
+ J. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In *ICCV*, pp. 2223–2232, 2017.
423
+
424
+ Yongchun Zhu, Fuzhen Zhuang, and Deqing Wang. Aligning domain-specific distribution and classifier for cross-domain classification from multiple sources. In *Proceedings of the AAAI Conference on Artificial* Intelligence, volume 33, pp. 5989–5996, 2019.
425
+
426
+ Yang Zou, Zhiding Yu, BVK Kumar, and Jinsong Wang. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European conference on computer vision (ECCV), pp. 289–305, 2018.
427
+
428
+ ## A Appendix
+
+ ## A.1 Optimal Transport For Domain Adaptation
429
+
430
+ Optimal Transport (OT) is a probability metric based on finding the optimal plan for transporting probability mass between two distributions. Given two probability distributions µ and ν over domains X and Y ,
431
+ OT is defined as:
432
+
433
+ $$W(\mu,\nu)=\inf_{\gamma\in\Pi(\mu,\nu)}\int_{X\times Y}d(x,y)\,d\gamma(x,y)\qquad(5)$$
436
+ where Π(µ, ν) represents the set of all joint distributions γ(x, y) with marginals µ and ν on X and Y ,
437
+ respectively. The transportation cost is denoted as d(·, ·), which can vary based on the specific application.
438
+
439
+ For instance, in many UDA methods, the Euclidean distance is used. However, computing the OT involves solving a complex optimization problem and can be computationally burdensome. Alternatively, SWD
440
+ reduces the computational complexity while retaining the foundational benefits of OT.
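
Concretely, the SWD approximates the Wasserstein distance by averaging closed-form one-dimensional Wasserstein distances over random projections of the embeddings. The snippet below is a minimal sketch of this computation, assuming equal-sized batches of flattened embeddings; the function name `sliced_wasserstein` and the default number of projections are illustrative choices, not our exact implementation.

```python
import torch
import torch.nn.functional as F

def sliced_wasserstein(z_src, z_tgt, n_proj=128):
    """Monte-Carlo sliced Wasserstein distance between two batches of embeddings.

    z_src, z_tgt: (batch, dim) tensors; the batches are assumed to have the
    same size so the sorted 1-D projections can be compared element-wise.
    """
    dim = z_src.shape[1]
    # Random unit-norm projection directions (the "slices").
    theta = F.normalize(torch.randn(dim, n_proj, device=z_src.device), dim=0)
    # Project both batches onto each direction, then sort along the batch axis;
    # in one dimension, sorting realizes the optimal transport coupling.
    proj_src, _ = torch.sort(z_src @ theta, dim=0)
    proj_tgt, _ = torch.sort(z_tgt @ theta, dim=0)
    return ((proj_src - proj_tgt) ** 2).mean()
```

In the first stage of our method, a distance of this form is minimized between the encoder outputs of the target domain and each source domain.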
441
+
442
+ ## A.2 The Segmentation Architecture
443
+
444
+ Figure 7 presents the architecture of the 3D U-Net that we used in our experiments.
445
+
446
+ ![17_image_0.png](17_image_0.png)
447
+
448
+ Figure 7: The 3D U-Net architecture
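
As a complement to the diagram, the snippet below sketches a 3D U-Net of this general shape in PyTorch. It is a simplified illustration with assumed depth, channel widths, and normalization; the network in Figure 7 may differ in these details. The encoder half (down to the bottleneck) plays the role of the shared encoder whose output space is aligned across domains.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3x3 convolutions, each followed by instance norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.InstanceNorm3d(c_out), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.InstanceNorm3d(c_out), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, width=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, width)
        self.enc2 = conv_block(width, 2 * width)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(2 * width, 4 * width)
        self.up2 = nn.ConvTranspose3d(4 * width, 2 * width, 2, stride=2)
        self.dec2 = conv_block(4 * width, 2 * width)
        self.up1 = nn.ConvTranspose3d(2 * width, width, 2, stride=2)
        self.dec1 = conv_block(2 * width, width)
        self.head = nn.Conv3d(width, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution (embedding features)
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1)) # skip connection
        return self.head(d1)                 # per-voxel class logits

# Example: volumes must have spatial sizes divisible by 4 at this depth.
# logits = UNet3D(in_ch=1, n_classes=2)(torch.randn(1, 1, 32, 64, 64))
```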
449
+
450
+ ## A.3 Details Of Setting The Optimization Method
451
+
452
+ We used ADAM because it is well suited to large-scale problems with sparse gradients. It combines the advantages of the Adaptive Gradient Algorithm (AdaGrad) and Root Mean Square Propagation (RMSProp), allowing it to handle non-stationary objectives and noisy gradients. The main steps of our setup are listed below, followed by a minimal training-loop sketch.
453
+
454
+ - **Initialization:** We initialized the weights to be optimized and set hyperparameters such as the learning rate, first and second moment estimates, and smoothing terms according to common best practices.
455
+
456
+ - **Iteration:** During each iteration, the optimizer computes the gradients of the DICE score with respect to the weights and updates the weights in a direction that is expected to increase the DICE
457
+ score.
458
+
459
+ - **Adaptive Learning Rates:** ADAM dynamically adjusts the learning rates during optimization, using both momentum (moving average of the gradient) and variance scaling. This makes it robust to changes in the landscape of the objective function.
460
+
461
+ - **Termination Criteria:** The optimization was terminated upon convergence, determined either by reaching a maximum number of epochs or by the change in the DICE score falling below a tolerance.
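
The sketch below summarizes these steps in code. It assumes a Dice-based loss of the form 1 - Dice, so the score can be recovered from the loss value; the function name and the specific values shown (learning rate, moment estimates, smoothing term, tolerance) are common defaults used for illustration, not necessarily the exact settings of our experiments.

```python
import torch

def train_with_adam(model, loader, loss_fn, max_epochs=100, tol=1e-4):
    """Illustrative ADAM loop with a DICE-based termination criterion."""
    opt = torch.optim.Adam(model.parameters(),
                           lr=1e-4,             # learning rate
                           betas=(0.9, 0.999),  # first/second moment decay rates
                           eps=1e-8)            # smoothing term
    prev_dice = 0.0
    for epoch in range(max_epochs):
        dice_sum, n_batches = 0.0, 0
        for x, y in loader:
            loss = loss_fn(model(x), y)    # e.g., 1 - soft Dice score
            opt.zero_grad()
            loss.backward()                # gradients w.r.t. the weights
            opt.step()                     # adaptive per-parameter update
            dice_sum += 1.0 - loss.item()  # recover the Dice score
            n_batches += 1
        mean_dice = dice_sum / max(n_batches, 1)
        # Terminate once the epoch-to-epoch change in Dice is tolerably small.
        if abs(mean_dice - prev_dice) < tol:
            break
        prev_dice = mean_dice
    return model
```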
ii6heihoen/ii6heihoen_meta.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 19,
6
+ "ocr_stats": {
7
+ "ocr_pages": 0,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 0,
10
+ "ocr_engine": "none"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 19,
14
+ "code": 0,
15
+ "table": 4,
16
+ "equations": {
17
+ "successful_ocr": 10,
18
+ "unsuccessful_ocr": 0,
19
+ "equations": 10
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }