context
stringlengths
100
5.69k
A
stringlengths
100
3.76k
B
stringlengths
100
3.61k
C
stringlengths
100
5.61k
D
stringlengths
100
3.87k
label
stringclasses
4 values
The MoNuSAC20 dataset [71] was designed to be representative of various organs and nucleus types relevant to tumor research. Specifically, it included Lymphocytes, Epithelial, Macrophages, and Neutrophils. The training data consisted of cropped whole slide images (WSIs) obtained from 32 hospitals and 46 patients from TCGA [72] data portal, scanned at a 40×\times× magnification. The dataset provides nuclei class labels along with nuclear boundary annotations. The testing data followed a similar preparation procedure but included annotations for ambiguous regions. These are regions with faint nuclei, unclear boundaries, or where the true class is not confirmed by annotators. The testing data comprised 25 patient samples from 19 different hospitals, with 14 hospitals overlapping with the training dataset.
For the MoNuSeg18 dataset [1], 256×256256256256\times 256256 × 256 dimension patches (images and masks) from 1000×1000100010001000\times 10001000 × 1000 images were extracted to use for training and testing purposes of segmentation models. In order to prevent testing set leakage and inaccurate assessment metrics, it was also made sure that testing patches stayed in the testing set and training patches stayed in the training set. The size of the dataset is increased by employing a variety of augmentation techniques during training step, such as RandomAffine, PhotoMetricDistortion, Random Horizontal and Vertical Flip with 0.50.50.50.5 flip probability. The step-by-step outcome of these pre-processing steps is shown in figure 3 for MoNuSeg18 [1] dataset. Considering the modality differences between ImageNet and histopathology images, we calculated normalization parameters (mean=[171.31, 119.69, 157.71], std=[56.04, 59.61, 47.69]) and used them for image normalization during training and testing phases.
The MoNuSeg 2018 challenge provided a challenging dataset [1] comprising images from 7 different organs: (1) breast, (2) colon, (3) bladder, (4) stomach, (5) kidney, (6) liver, and (7) prostate. Also, images acquired from 18 different hospitals, practicing different staining techniques and image acquisition equipment, add another source of variation and ensure the diversity of nuclear appearances. The training data consists of 30 tissue images (1000×\times×1000 resolution), 7 validation images, and 14 test images. The training dataset consists of 21623 annotated manually nuclear boundaries. For each selected individual patient from TCGA [72], an image was extracted from a distinct whole slide image (WSI) that was scanned at 40×\times× magnification. Sub-images were selected from regions containing a high density of nuclei. To ensure diversity in the dataset, only one crop per WSI and patient was included. The test comprises 14 images spanning 5 organs common with the training set: (1) breast, (2) colon, (3) bladder, (4) kidney, (5) liver, and 2 organs different from the testing set: (1) lung, (2) brain, to make the test set more challenging. The test set contains approximately 7,223 annotated nuclear boundaries.
For the MoNuSAC20 dataset [71], the same pre-processing was applied as for the MoNuSeg18, i.e. 256×256256256256\times 256256 × 256 dimension patches (images and masks) were extracted for training and testing. The same augmentation techniques were applied to increase the dataset size and robustness of the model. During the training and testing stages of MoNuSAC20, the ImageNet normalization parameters mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375] were applied because they produced good results on this dataset.
Figure 3: Data Pre-processing Pipeline visualized for MoNuSeg18 dataset. From left to right: Original Image (resized to 256×256256256256\times 256256 × 256), Random Affine (combination of Shift, Scale, and Rotate), Random Flip (either Horizontal or Vertical), PhotoMetric Distortion (changes the intensity of pixels), Padding (to ensure 256×256256256256\times 256256 × 256 image size), Final Augmented Input and Mask image are shown.
A
Large foundation models pre-trained on large-scale datasets, are transforming the landscape with powerful zero-shot capabilities [56, 27, 55, 54]. These foundational models showcase an impressive ability in adapting to the tasks not seen during training. A standout example is the Segment Anything Model (SAM) [27], which has gained great success for the zero-shot image segmentation. The strength of SAM lies in its interactive segmentation paradigm: the model segments the target following the user-given prompts, such as a point, a bounding box (BBox), or free text-like descriptions.
Medical image segmentation, as a unique component of the image segmentation, plays a vital role in real-world clinical practices, including disease diagnosis and image-guided surgery. Many efforts have been made on bring this interactive foundation model to the medical image segmentation through fine-tuning [58, 13, 36]. However, most of them still need to re-training the model for each new task, leading to actually a loss of zero-shot generalization. Additionally, in these interactive models, the users have to provide prompts for each image, which is time-consuming and inapplicable for building the automatic pipeline.
Interactive segmentation models achieve zero-shot generalization by prompting each of the test sample. When we offer the One-Prompt Model with the same query image and prompted template image, the model degrades to a standard interactive segmentation model. We compare this setting with other interactive segmentation models, including vanilla SAM [27], SAM-U [14], VMN [61], iSegFormer[34], MedSAM [37], MSA [58], and SAM-Med2D [13]. Except vanilla SAM, all models are trained on the same dataset as ours. Since most of these models only accept Click and BBox prompts, Doodle and SegLab prompt settings are not included in this comparison. Since all these models need the prompt on each input image, we simulate the oracle prompts (details in Section 4.5: Effect of prompt quality & types in the inference) over the images if needed. It is worth noting that we did not re-train One-Prompt Model on the simulated prompts: we use the same trained One-Prompt Model as that in the last section, but only offered the simulated prompts in testing for the possible of comparison.
Figure 1: Medical segmentation involves a wide range of different organs, tissues and anatomies. One-Prompt Segmentation is a novel paradigm to building a foundation model that can generalize to unseen tasks. Given an unseen task, One-Prompt Model only needs the users to prompt one image to grasp the task, which is notably cost-effective comparing with interactive and few-shot segmentation.
In this paper, we introduce a new paradigm for the universal medical image segmentation, called One-Prompt Medical Image Segmentation. This method combines the strengths of both one-shot and interactive models to meet the real clinical requirements. Specifically, given an unseen task, the user only needs to provide one prompted sample to the trained model, then it can perform well at this new task without any retraining or fine-tuning, even for tasks significantly different from those encountered during training. An illustration is shown in Fig. 1.
A
The values assigned to these variables are not fixed and are estimated based on empirical observations. They are subject to change to accommodate the dynamic nature of relay volumes.
An experiment was conducted to examine this. Using a target of T=104𝑇superscript104T=10^{4}italic_T = 10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT claims per block, the variability and bias of the estimated number of relays Rd⁢A⁢p⁢psubscript𝑅𝑑𝐴𝑝𝑝R_{dApp}italic_R start_POSTSUBSCRIPT italic_d italic_A italic_p italic_p end_POSTSUBSCRIPT were calculated. The difficulty d𝑑ditalic_d (inverse of hash collision probability p𝑝pitalic_p) ranged from 1.251.251.251.25 relays per claim to 1000100010001000 relays per claim, and the dApp participation v𝑣vitalic_v ranged from 0.1%percent0.10.1\%0.1 % to 10%percent1010\%10 % of total blockchain traffic. For each test point, a total of I=105𝐼superscript105I=10^{5}italic_I = 10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT draws from the resulting variable x∼B⁢(R,v,pb)similar-to𝑥𝐵𝑅𝑣subscript𝑝𝑏x\sim B(R,v,p_{b})italic_x ∼ italic_B ( italic_R , italic_v , italic_p start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT ) were sampled. The bias and variability of the estimated dApp traffic were then calculated as follows:
Target claims by block : T=104𝑇superscript104T=10^{4}italic_T = 10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT
1:T←104←𝑇superscript104T\leftarrow 10^{4}italic_T ← 10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT▷▷\triangleright▷ Target claims by blockchain.
1:A⁢p⁢p⁢S⁢t⁢a⁢k⁢e←106←𝐴𝑝𝑝𝑆𝑡𝑎𝑘𝑒superscript106AppStake\leftarrow 10^{6}italic_A italic_p italic_p italic_S italic_t italic_a italic_k italic_e ← 10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT▷▷\triangleright▷ Amount of staked Applications tokens
C
As for semantic information, we use pre-trained BERT [23] is used to capture it at the character level.
Both emphasis positions and linguistic embedding are generated from extracted linguistic information.
EE-TTS consists of 1) a linguistic information extractor to extract the syntactic and semantic information from the text; 2) an emphasis predictor to predict the positions of emphasis according to linguistic information; and 3) a conditioned acoustic model to generate expressive speech conditioned on emphasis positions and linguistic information.
We choose FastSpeech2 [3] as the base architecture of the acoustic model, leveraging emphasis positions and linguistic embedding as conditions. These conditions are obtained through the emphasis predictor and the linguistic encoder respectively.
The acoustic model synthesized the speech conditioned on emphasis positions and linguistic embedding.
D
The music production community also implements acoustic matching to modify the reverberation, thus simulating the reverberation of the target space or processing algorithm Koo et al. (2021); Sarroff and Michaels (2020). Recently, there is research on visual acoustic matching Chen et al. (2022), which involves generating audio recorded in the target environment based on the input source audio clip and an image of the target environment. However, our proposed visual TTS is distinct from those mentioned above as as it aims to generate audio that captures the room acoustics in the target environment based on the written text and the target environment image.
Training visual text-to-speech models typically requires a large amount of parallel target environment image and audio training data, while there may be very few resources due to the heavy workload. In this section, we prepare low-resource audio-visual data (1h/2h/5h) and leverage large-scale text-only and audio-only data to boost the performance of the visual TTS system, to investigate the effectiveness of our self-supervised learning methods. The results are compiled and presented in Table 3, and we have the following observations:
To enhance visual-acoustic matching, we 1) propose the visual-text fusion to integrate visual and textual information, which provides fine-grained language-visual reasoning by attending to regions of the image; 2) leverage transformer architecture to promote the scalability of the diffusion model. Regarding the data shortage challenge, we pre-train the encoder and decoder in a self-supervised manner, showing that large-scale pre-training reduces data requirements for training visual TTS models.
To mitigate the data scarcity for training visual TTS tasks and model visual acoustic information, we 1) introduced a self-supervised learning framework to enhance both the visual-text encoder and denoiser decoder; 2) leveraged the diffusion transformer scalable in terms of parameters and capacity to improve performance.
The overall architecture has been presented as Figure  1. To alleviate the issue of data scarcity, we leverage unlabeled data to pre-train the visual-text encoder and denoiser decoder with scalable transformers in a self-supervised manner. To capture the visual scene information, we employ the visual-text fusion module to reason about how different image patches contribute to texts. BigvGAN (Lee et al., 2022) converts the mel-spectrograms into audio that matches the target scene as a neural vocoder.
D
6.61×10116.61superscript10116.61\times 10^{11}6.61 × 10 start_POSTSUPERSCRIPT 11 end_POSTSUPERSCRIPT
7.32×10107.32superscript10107.32\times 10^{10}7.32 × 10 start_POSTSUPERSCRIPT 10 end_POSTSUPERSCRIPT
1.68×10111.68superscript10111.68\times 10^{11}1.68 × 10 start_POSTSUPERSCRIPT 11 end_POSTSUPERSCRIPT
6.61×10116.61superscript10116.61\times 10^{11}6.61 × 10 start_POSTSUPERSCRIPT 11 end_POSTSUPERSCRIPT
1.94×10111.94superscript10111.94\times 10^{11}1.94 × 10 start_POSTSUPERSCRIPT 11 end_POSTSUPERSCRIPT
B
The aforementioned architectures are part of the WaveFake dataset (Frank & Schönherr, 2021), which we will study in detail.
To ensure our detectors identify the newest generators, we extend the dataset proposed by Frank & Schönherr (2021) by adding two recent text-to-speech synthesis networks. We include the standard and large BigVGAN (Lee et al., 2023a) architecture as well as the Avocodo (Bak et al., 2022) network.
Our study reveals stable frequency domain artifacts for many modern speech synthesis networks. We visualize generator artifacts for all generators in the WaveFake dataset (Frank & Schönherr, 2021) generators and the Avocodo (Bak et al., 2022) and BigVGAN (Lee et al., 2023a) networks.
We extend the WaveFake dataset by adding samples drawn from the Avocodo (Bak et al., 2022) and BigVGAN (Lee et al., 2023a) architectures. Due to a lack of pre-trained weights, we retrained Avocodo using the publicly available implementation from (Bak et al., 2023) commit 2999557. We trained for 346 epochs or 563528 steps.
Furthermore, novel \acTTS systems have appeared since the publication of the WaveFake dataset. Lee et al. (2023a), for example, trained the biggest vocoder to date. Additionally, their architecture shifts to periodic activation functions. The authors report excellent generalization properties. Further, the parallelly developed Avocodo network (Bak et al., 2022) aims to reduce artifacts by removing low-frequency bias. We additionally include both architectures in our study.
D
More importantly, the uncertainty of FB can be captured by estimating the variance of 𝐭*superscript𝐭\mathbf{t^{*}}bold_t start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT using the Monte Carlo method as well.
More importantly, the uncertainty of FB can be captured by estimating the variance of 𝐭*superscript𝐭\mathbf{t^{*}}bold_t start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT using the Monte Carlo method as well.
Concretely, the variance of j𝑗jitalic_j-th element of 𝐭*superscript𝐭\mathbf{t^{*}}bold_t start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT is estimated by:
where tj*subscriptsuperscript𝑡𝑗{t^{*}_{j}}italic_t start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT represents the random variable of j𝑗jitalic_j-th FB, t^j,ksubscript^𝑡𝑗𝑘\hat{t}_{j,k}over^ start_ARG italic_t end_ARG start_POSTSUBSCRIPT italic_j , italic_k end_POSTSUBSCRIPT is the j𝑗jitalic_j-th element of k𝑘kitalic_k-th sampled prediction (Eq. 7), and Eq⁢[⋅]subscript𝐸𝑞delimited-[]⋅E_{q}\left[\cdot\right]italic_E start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT [ ⋅ ] means the expectation based on the posterior distribution q⁢(𝐭*|𝐆*,𝒟)𝑞conditionalsuperscript𝐭superscript𝐆𝒟q(\mathbf{t^{*}}|\mathbf{G^{*}},\mathcal{D})italic_q ( bold_t start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT | bold_G start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT , caligraphic_D ).
where tjMsubscriptsuperscript𝑡M𝑗t^{\text{M}}_{j}italic_t start_POSTSUPERSCRIPT M end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT is the j𝑗jitalic_j-th element of 𝐭Msuperscript𝐭M\mathbf{t^{\text{M}}}bold_t start_POSTSUPERSCRIPT M end_POSTSUPERSCRIPT, and
B
Figure 8 illustrates the reliability diagrams and prediction interval widths for 1-step ahead forecasts. In contrast to cases 1 and 2, the reliability of the UI model deviates significantly from the ideal case. This deviation may be attributed to the case’s violation of the MAR/MCAR assumption upon which the UI model relies. Conversely, the proposed model demonstrates greater suitability for MNAR cases, as it operates without such assumptions.
Table 2: The CRPS values of forecasts by the proposed and benchmark models with different lead times in case 2 (%).
Table 3: The CRPS values of forecasts by the proposed and benchmark models with different lead times in case 3 (percentage).
Table 2 displays the CRPS values for forecasts generated by both the proposed and benchmark models. In this case, the differences in CRPS values among all models are smaller compared to those in case 1. Unlike case 1, missingness occurs in blocks, resulting in a greater number of samples with complete observations. Consequently, the impact of missing values on the quality of forecasts is reduced. Among models employing the “impute, then predict” strategy, DeepAR continues to exhibit the poorest performance, although the difference between DeepAR and IM-Gaussian/IM-QR is smaller than in case 1. In contrast, the performance of the proposed and UI models remains superior to that of “impute, then predict” strategy-based models and is comparable to the reference model. This implies the applicability of the proposed and UI models to cases with both sporadic and block-wise missingness.
Table 1: The CRPS values of forecasts by the proposed and benchmark models with different lead times in case 1 (%).
B
Let m∈ℕ𝑚ℕm\in\mathbb{N}italic_m ∈ blackboard_N with m≥1𝑚1m\geq 1italic_m ≥ 1, α>0𝛼0\alpha>0italic_α > 0, and p0∈H2⁢m+1⁢(ℝn,ℝ)subscript𝑝0superscript𝐻2𝑚1superscriptℝ𝑛ℝp_{0}\in H^{2m+1}(\mathbb{R}^{n},\mathbb{R})italic_p start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ∈ italic_H start_POSTSUPERSCRIPT 2 italic_m + 1 end_POSTSUPERSCRIPT ( blackboard_R start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT , blackboard_R ). There exist (a∗,b∗)∈ℋm+subscript𝑎subscript𝑏subscriptsuperscriptℋ𝑚(a_{*},b_{*})\in\mathcal{H}^{+}_{m}( italic_a start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT , italic_b start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ) ∈ caligraphic_H start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT and a (unique) solution X𝑋Xitalic_X to SDE with coefficients (a∗,b∗)∈ℋm+subscript𝑎subscript𝑏subscriptsuperscriptℋ𝑚(a_{*},b_{*})\in\mathcal{H}^{+}_{m}( italic_a start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT , italic_b start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT ) ∈ caligraphic_H start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT.
is continuous. By combining this latter property with (26) and Theorem 8.2, one readily checks that the curve of probabilities μ𝜇\muitalic_μ satisfies:
Thanks to the results we gathered in Section 3, one readily checks that Assumption (A)𝐴(A)( italic_A ) yields the following characterization of the mapping X𝑋Xitalic_X:
Before moving to the core of this section, by leveraging the facts we recalled in Section 4, we provide a crucial approximation results for coefficients (a,b)∈ℋm,R∗+𝑎𝑏subscriptsuperscriptℋ𝑚subscript𝑅(a,b)\in\mathcal{H}^{+}_{m,R_{*}}( italic_a , italic_b ) ∈ caligraphic_H start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_m , italic_R start_POSTSUBSCRIPT ∗ end_POSTSUBSCRIPT end_POSTSUBSCRIPT. Specifically, by combining the bounds for Sobolev functions with scattered zeros we listed in Section 4 with Theorem 2.1, we obtain the following:
The paper is organized as follows. After gathering basic notation and preliminary results in Section 2, in Section 3 we summarize both classical and less classical results about stochastic differential equations and corresponding relationships with the Fokker-Planck equation. To ease the reading of this section, we moved a more detailed description of the aforementioned results to Section 8 and their technical proofs to Appendix A. Similarly, in Section 4 we expose the results in RKHS theory which we leverage in this work. Our main contributions are contained in Section 5, which in particular details the methodologies we exposed at the previous steps 1) and 2) and corresponding learning rates. Precision and computational complexity of our approach are discussed in Section 6. Finally, in Section 7 we provide concluding remarks and some perspectives.
B
Most of the previous token pruning works focus on NLP tasks, including PoWER-BERT [15], Length-Adaptive Transformer [19], SpAtten [39], TR-BERT [42], and Learned Token Pruning [20]. For CV tasks, a typical token pruning work is DynamicViT [30].
It inserts prediction modules between transformer blocks to predict and drop less informative tokens. The prediction
As discussed previously, it is valuable to measure similarity even between important tokens and perform further pruning.
The overall Zero-TPrune framework is shown in Fig. 2. Each pruning layer is composed of multiple stages and can be inserted anywhere between Transformer blocks. The I-stage and S-stage enable Zero-TPrune to take both importance and similarity into consideration. The objective of the I-stage is to obtain an importance score distribution on tokens and retain the top-k𝑘kitalic_k important tokens. To achieve this objective, we propose the WPR algorithm and use the attention matrix from the pre-trained Transformer block. In the S-stage, we measure the similarity between tokens based on their embedding vectors and retain only one token in the top-r𝑟ritalic_r similar pairs. To reduce computational overheads from all pair-wise combinations, we partition tokens into bipartite groups. Tokens in the same group are never paired to measure similarity. To have improved control over the importance distribution of pruned tokens, we guide the partitioning by their importance rank.
Figure 2: The overall Zero-TPrune framework. Pruning layers can be inserted between Transformer blocks to reduce
A
The folder named ‘Anti-noise experiment’ contains the source code of PSD performance evaluation of all seven PSD methods previously mentioned in various noise scenarios.
To evaluate the anti-noise ability of the PSD methodologies, one can simply run the ‘main.m’ program, which performs PSD experiments on datasets consisting of raw neutron and gamma-ray pulse signals with additional Gaussian noise. The variance of the added noise ranges from 0.01 to 0.025. Discrimination processes are independently executed one hundred times for each method and noise level to obtain the average performance of each method under different noise conditions.
The differences between these two types of particles’ pulses locates at the falling edge and delayed fluorescence parts, ranging from approximately 80-100 ns and 100-150 ns, respectively.The filtered signal was obtained by applying a Fourier filter to remove low-frequency noise from the raw signal, which is a standard pre-processing step for PSD applications. This step improves the performance of discrimination methods by making them work in low-noise scenarios. Finally, the noise-enhanced signal was obtained by adding Gaussian noise with a variance of 0.001 to the raw signal, simulating extreme noise conditions. This file was provided to help evaluate the performance of PSD methods in high-noise scenarios.
The folder named ’Discrimination performance evaluation’ includes the source codes for comparing the PSD performance of the seven methodologies mentioned earlier. To evaluate the discrimination performance, one can run the ’main.m’ program with the default settings. This program performs discrimination factor calculation, histogram generation, and double Gaussian fitting for all discrimination methods discussed in the previous section. After this, the figure of merit (FOM) value is calculated for each discrimination method using the Gaussian fitting results and the formula given below,
The provided dataset contains radiation pulse signals captured from a neutron and gamma-ray superposed field, as well as source codes for several discrimination methodologies, both traditional and state-of-the-art. This dataset also includes evaluation criteria and the implementation of PSD performance assessment, facilitating easy comparison of the efficacy of different PSD methods. Additionally, anti-noise evaluation codes for PSD methods are included, allowing for validation of each method’s performance across varying noise levels. Overall, this dataset offers a comprehensive source of performance evaluation codes for PSD methodologies. It can be used to compare newly developed PSD methods with existing ones, thereby expediting advancements in the field of PSD.
A
Moreover, in the larger dataset comprising 1,000 real images and 2,000 synthetic images, the performance of Med-DDPM reached its peak with a Dice score of 0.6675, surpassing the baseline score of 0.6531 for real images and demonstrating its potential for data augmentation capabilities.
In scenarios involving 1,000 real images, 1,000 synthetic images, and their combinations, Med-DDPM consistently outperformed the baseline models. Specifically, in the experiment with solely 1,000 synthetic images, Med-DDPM achieved a Dice score of 0.6207, surpassing 3D DiscoGAN (0.4685) and 3D Pix2Pix (0.3171). In combined scenarios, such as 1,000 real images with 1,000 synthetic images, Med-DDPM maintained its lead with a Dice score of 0.6561, compared to 0.6239 for 3D DiscoGAN and 0.6343 for 3D Pix2Pix.
While 3D DiscoGAN and 3D Pix2Pix models demonstrated improvements in mixed data scenarios, they were consistently outclassed by Med-DDPM. Only 3D DiscoGAN achieved the highest precision score of 0.91 on 1,000 synthetic images, where the real precision score was 0.88. Additionally, it attained a precision score of 0.95 on combinations of 1,000 real and 2,000 synthetic images. This implies that the model has a low rate of false positives in its predictions and is highly proficient at accurately delineating the boundaries or regions corresponding to the positive class (i.e., the tumor class) in the segmentation task. Our proposed Med-DDPM also achieved a high precision score, closely approaching that of the 3D DiscoGAN.
Moreover, in the larger dataset comprising 1,000 real images and 2,000 synthetic images, the performance of Med-DDPM reached its peak with a Dice score of 0.6675, surpassing the baseline score of 0.6531 for real images and demonstrating its potential for data augmentation capabilities.
In the comparison of various generative models for evaluating the overall quality of synthetic brain MRIs (Fig. 2), Med-DDPM consistently outperformed baseline models such as 3D StyleGAN, HA-GAN, LDM, and 3D-α𝛼\alphaitalic_α-WGAN-GP, especially in maintaining the structural integrity and realistic representation of both normal brain tissue and pathological features like tumors. This was evident from the quantitative results presented in Table 1, where Med-DDPM achieved the lowest MSE score of 0.0146 and was closest to the real image, with an MS-SSIM score of 0.6132 compared to the MS-SSIM score of 0.5864 for real images. However, its higher 3D-FID score, compared to other models, suggests that there is room for further optimization in feature extraction specific to brain imaging.
B
We quantitatively compare the generation results with 4 state-of-the-art methods: Audio2Head [wang2021audio2head], PC-AVS [zhou2021pose], Wav2Lip [prajwal2020lip], and SyncTalkFace [park2022synctalkface] on LRW, LRS2, and HDTF datasets in Table 3. On LRW and HDTF, our method achieves the best on all the metrics, especially outperforming on the lip-sync. Compared to other methods that involve feature disentanglement (PC-AVS), assistant module (SyncTalkFace, Wav2lip), and intermediate structural representations (Audio2Head), we validate that exploiting phonetic context in modeling lip motion is a more powerful scheme to achieve accurate lip synchronization.
We perform two experiments to analyze to what extent context contributes to lip synchronization. First, we mask out 0.14s (7 frames) of the source audio and generate frames of the masked time steps. As shown in Fig.2, our work can still generate well-synchronized lips even with the absence of audio because it is able to attend to the surrounding phones and predict lip motion that fits into the masked region in context. In contrast, the previous works cannot generate correct lip movements because they do not consider surrounding phones. Such results verify that our model effectively incorporates context information in modeling lip movement of the talking face.
We quantitatively compare the generation results with 4 state-of-the-art methods: Audio2Head [wang2021audio2head], PC-AVS [zhou2021pose], Wav2Lip [prajwal2020lip], and SyncTalkFace [park2022synctalkface] on LRW, LRS2, and HDTF datasets in Table 3. On LRW and HDTF, our method achieves the best on all the metrics, especially outperforming on the lip-sync. Compared to other methods that involve feature disentanglement (PC-AVS), assistant module (SyncTalkFace, Wav2lip), and intermediate structural representations (Audio2Head), we validate that exploiting phonetic context in modeling lip motion is a more powerful scheme to achieve accurate lip synchronization.
We qualitatively compare the generation results in Fig.4. Our method is most precisely aligned in spatio-temporal dimension and generates the most temporally stable lips, distinct to each phone in context. For example in (a), when pronouncing ‘k\textipa@m’ in ‘combating’, our method wide opens the mouth and gradually closes with smooth transitioning. On the other hand, other methods are temporally misaligned and discontinuous: Audio2Head fails to completely close the mouth at ‘m’, Wav2Lip closes its mouth but with slightly projected lips, and SyncTalkFace fails to clearly open the mouth at phone ‘k’. The man in (b) is pronouncing ‘æskt’ in ‘asked for’. Our method is the only work that successfully captures the slightly projected lips at ‘t’ transitioning into ‘f’. Such results demonstrate that our method fully makes use of the phonetic context for temporally aligned and consistent lip synchronization.
Talking face generation aims to synthesize a photo-realistic portrait with lip motions in-sync with an input audio. Since the generation involves facial dynamics, especially of the mouth, viewers especially attend to the mouth region. Therefore, precisely aligning the mouth with the driving audio is critical for realistic talking face generation. In this paper, we focus on establishing audio-visual correlation for lip-syncing on arbitrary faces in the wild.
C
,\leavevmode\nobreak\ \tau>0.italic_h start_POSTSUBSCRIPT Diff end_POSTSUBSCRIPT ( italic_λ ) = italic_β start_POSTSUBSCRIPT Diff end_POSTSUBSCRIPT roman_exp ( - italic_τ italic_λ ) , italic_τ > 0 .
where a smooth graph filter is defined in Definition 1, and under ℋ1subscriptℋ1\mathcal{H}_{1}caligraphic_H start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT the graph filter, h⁢(𝐋)ℎ𝐋{h}({\bf{L}})italic_h ( bold_L ), does not satisfy Definition 1.
ℋ0:the graph filter h⁢(𝐋) is smooth,ℋ1:the graph filter h⁢(𝐋) is not smooth,:subscriptℋ0the graph filter h⁢(𝐋) is smooth:subscriptℋ1the graph filter h⁢(𝐋) is not smooth\displaystyle\begin{array}[]{l}\mathcal{H}_{0}:\text{the graph filter ${h}({%
(according to Definition 2) is a smooth graph filter (according to Definition 1) under the condition
Any monotonically non-increasing graph filter with at least two distinct coefficients is a smooth graph filter according to Definition 1.
D
Later results in Tab. 4 show that our study-level pretraining leads to substantial accuracy gains at AS diagnosis compared to image-level pretraining.
Our pretraining strategy builds upon MoCo (He et al., 2020; Chen et al., 2020b), a recent method for self-supervised image-level contrastive learning (img-CL) that yields state-of-the-art representations via an instance discrimination task (Wu et al., 2018; Ye et al., 2019; Bachman et al., 2019).
Recently, self-supervision has been successfully applied to pretrain MIL models (Holste et al., 2022a, b; Liu et al., 2022; Lu et al., 2019; Li et al., 2021a; Saillard et al., 2021; Dehaene et al., 2020; Rymarczyk et al., 2023). However, these studies all apply self-supervised contrastive learning to representations of individual images.
Self-supervised learning (SSL) has demonstrated success in learning visual representations  (Chen et al., 2020a; He et al., 2020; Chen et al., 2020b; Grill et al., 2020; Caron et al., 2020; Chen and He, 2021; Huang et al., 2023a). SSL requires defining a pretext task such as
Here, we focus on the instance discrimination task (Wu et al., 2018) and InfoNCE loss (Oord et al., 2018) following the success of momentum contrastive learning (MoCo) (He et al., 2020; Chen et al., 2020b).
A
This work is partially funded by Theme based Research Scheme (T45-205/21-N), Research Grants Council of Hong Kong.
Emmanouil Benetos is supported by a RAEng/Leverhulme Trust Research Fellowship [grant number LTRF2223-19-106]. We acknowledge IT Services at The University of Sheffield for the provision of services for High Performance Computing.
This work is partially funded by Theme based Research Scheme (T45-205/21-N), Research Grants Council of Hong Kong.
This paper is a tribute to our talented friend Anqiao Yang, for his friendship and valuable advice to this work. Yizhi Li is a Ph.D. student fully funded by the Department of Computer Science, University of Manchester, UK.
Yinghao Ma is a research student at the UKRI Centre for Doctoral Training in Artificial Intelligence and Music, supported by UK Research and Innovation [grant number EP/S022694/1].
D
Recent advancements in RD modeling [38, 14, 30, 16, 35] have largely revolved around the vision transformer [7] (ViT) since its debut in the 2020s. The prowess of ViT-based models in RD is primarily due to their capacity to scale effectively; their performance improves as the model size grows [21]. Consequently, ViT-based methods typically exhibit superior performance over CNNs but with a cost of computational burden (as evident in Fig. 1). In addition, ViT-based models are advantageous for capturing long-range global dependencies via self-attention mechanism. Nonetheless, the quadratic time and memory complexity of self-attention operation make ViTs computationally intense and data-hungry. Accordingly, ViT-based RD models typically necessitate pretraining on large-scale datasets [38, 14, 16]. Furthermore, unlike CNNs, these models generally lack locality, since they operate at the image patch level [16].
Recent advancements in RD modeling [38, 14, 30, 16, 35] have largely revolved around the vision transformer [7] (ViT) since its debut in the 2020s. The prowess of ViT-based models in RD is primarily due to their capacity to scale effectively; their performance improves as the model size grows [21]. Consequently, ViT-based methods typically exhibit superior performance over CNNs but with a cost of computational burden (as evident in Fig. 1). In addition, ViT-based models are advantageous for capturing long-range global dependencies via self-attention mechanism. Nonetheless, the quadratic time and memory complexity of self-attention operation make ViTs computationally intense and data-hungry. Accordingly, ViT-based RD models typically necessitate pretraining on large-scale datasets [38, 14, 16]. Furthermore, unlike CNNs, these models generally lack locality, since they operate at the image patch level [16].
The Multi-Retinopathy delineates a broader subclassification of Retinopathy, introducing more precise representations of lesions. Several fundus images may carry one or multiple labels, such as asteroid hyalosis, anterior ischemic optic neuropathy, age-related macular degeneration, branch retinal vein occlusion, Choroidal folds, etc. Notably, many of these pathological changes are interrelated; for instance, the presence of cotton wool spots on the retina is a characteristic ocular manifestation of various medical conditions, including diabetes mellitus, systemic hypertension, leukemia, and AIDS [3]. CNNs remain dominant as the foundational design approach for multi-retinopathy abnormal detection. A significant portion of the benchmark methods based on CNNs originates from DR grading models. A notable example of such work is the development of CANet [15], which leverages multi-task learning to extract additional semantic information, thereby aiding the classification model. Most subsequent advancements in CNN-based methods have followed this conceptual framework [4, 15]. Contrasting with this trend, some studies argue that establishing long-range dependencies and capturing global semantic information learning is a potentially more effective strategy for advancing model capabilities. The MIL-VT introduces the ViT and incorporates multiple instance learning head to force the token to capture the lesion information [38]. However, this method processes each individual patch without emphasizing the semantics of smaller lesions, resulting in a lack of localized information modeling. Furthermore, they employed extensive external datasets for pre-training due to data-hangry nature of ViT. In contrast, SatFormer enhances the ViT framework by integrating multi-scale CNNs to detect small lesions, such as microaneurysms and exudates. This approach enriches the model’s capability to represent features of small lesions and to capture a wide range of pathological semantics [14]. This transition and amalgamation from ViT back to CNN prompt us to ponder whether CNNs are more suited for RD than ViTs or whether the potential of CNNs remains underexploited. This curiosity underpins our motivation for conducting deeper research into the CNNs in various RD tasks.
Retinal diseases (RD), such as diabetic retinopathy (DR), age-related macular degeneration, inherited retinal conditions, myopic maculopathy, and retinopathy of prematurity, are major contributors to blindness worldwide [37]. Deep neural networks, particularly convolutional neural networks (CNNs), have been extensively used in retinal image analysis over the past decades, achieving cutting-edge results in various RD-related tasks [43, 42, 15, 36, 39, 17, 27, 4, 40]. The effectiveness of CNNs in these applications is largely due to their built-in architectural inductive biases, such as spatial hierarchies, locality, and translation invariance. These characteristics enable CNNs to transform local visual elements like edges and textures into complex, high-level abstracted features. Building on this approach, numerous CNN-based RD models [36, 39, 15, 4] have incorporated disease-specific biases into their designs. However, the specialized nature of these CNN-based models for RD limits their versatility across a range of RD tasks.
To address these challenges, new iterations of ViT designs bring back convolution-like features to recover local context sensitivity [16, 20]. This adaptation is particularly beneficial for RD
D
The QuIFS algorithm applies to nonlinear systems and non-convex cost functions whenever the underlying optimal control problem admits a unique solution. It relies on coarse properties of the optimal feedback such as Lipschitz continuity, etc., rather than more detailed local structural properties; information concerning such coarse properties may be distilled directly from the problem data. This is a crucial point of departure from multiparametric techniques and deserves to be underlined. Of course, the optimal feedback is piecewise affine in the linear/affine setting under appropriate hypotheses; this important observation is now classical and follows from the central results of multiparametric programming in this context. In contrast, nonlinear models with non-convex cost functions may lead to a high degree of structural complexity, over and beyond the piecewise affine regime, of the optimal feedback, making it extremely difficult to find appropriate parametrization of such feedback maps.333In view of the current state of affairs of numerical analysis, parametrizing the optimal feedback (e.g., as along the lines of the Ritz method) does not appear to be a promising direction. Ours being an interpolation-driven technique, an approach via multiparametric programming turns out to be unnecessary in our setting; merely the ability to compute solutions to finite-horizon optimal control problems at each point of the feasible set is sufficient.
In the specific context of robust minmax MPC, offline explicit MPC techniques reported in [BBM03, GC12, PRCA06], are based on a partitioning of the state space into critical regions. These techniques cater to classes of linear controlled dynamical systems with bounded uncertainties. Most of these algorithms may fail to generate explicit control laws in situations where the prediction horizon is large. The primary reason for this is the potentially exponential growth of the number of polytopic regions with the number of constraints. We provide one such example where the explicit MPC algorithm terminated unsuccessfully due to the presence of large number of the polytopic regions when (approximate) multi-parametric programming-based tools were employed.222In contrast, our technique QuIFS produced visibly better results in terms of closeness to the online receding horizon control trajectories and the approximation quality; see Example (6.2) in §(6).
Of course, the computation of optimal policies in, e.g., ((5)) is a challenging problem. Over and above the exponential complexity introduced due to the uniform grid (pointed out in point (C) of §1), each point evaluation involves the numerical solution of a minmax problem. While the general case of nonlinear MPC offers little hope with regards to the indicated minmax computation at the present time, the linear analog (i.e., linear MPC, to be treated in §5) does indeed admit numerically tractable approaches in some of the most important cases. One of the early developments in this direction was reported in [BB07], hence the authors treated the case of the minmax problem with open-loop controls under control energy constraints and reduced it to a convex optimization program. More recently, riding on novel developments (reported in [DACC22]) on tractable techniques to solve convex semi-infinite programs, solutions to ((33)) ahead (analogs of ((5))) with polyhedral constraints under affine-feedback-in-the-noise control policies (pioneered in [Löf03b]) have been reported in [GGC23].
The complexity of the offline computations associated with QuIFS, as it stands today, is exponential in the number of states because the technique relies on a uniform grid. Recall [BMDP02, §4.4] that the complexity of standard explicit MPC for linear/affine models scales exponentially with the number of constraints in the worst case. For us, however, the complexity scales exponentially only with the state-dimension, and the number of constraints plays no role.
The industry of explicit MPC has a rich history, and we point the interested reader to the detailed survey article [AB09] for a sweeping perspective of the area. The importance of the explicit method is underscored by the fact that the online computation of receding horizon control law at each t𝑡titalic_t may be replaced by a function evaluation at each given state. This mechanism, at least in spirit, speeds up the computation of the MPC action by orders of magnitude, and primarily for this reason explicit MPC has found applications in several industrial plants; we refer the readers to [MDM12, KKK17, Ing17] for more information. Most of the techniques in explicit MPC rely essentially on multiparametric programming [BMDP02, KJP+19, KTHC15], and while exact characterizations of optimal feedbacks are available for a wide class of systems, for numerical tractability reasons most of the applicable results are limited to the linear/affine models. In this linear/affine regime, under mild hypotheses, the optimal implicit feedback turns out to be a piecewise affine mapping [BMDP02]. Several approaches to explicit MPC for nonlinear models have been developed, and “approximation” seems to be the driving force behind them; naturally, such efforts are accompanied by the key computational challenge of our times — the curse of dimensionality, and that problem persists herein. Among the vast literature on the subject, we mention the following: A binary search tree and orthogonal decomposition-based algorithm to approximate the feedback function via piecewise affine approximations was established in [Joh04] and its precursors. In [CFM09] a survey of set membership-based approximation methods for linear and nonlinear MPC problems was provided. Offline approximation of possibly discontinuous predictive laws was studied in [PFP+13]. A multiresolution wavelet-based approximation method was introduced for both linear [SJLM11] and nonlinear MPC [RRS+12] with guaranteed stability and feasibility of the feedback system; these contributions are perhaps closest to our approach although the estimates provided herein are uniform and rigorous.
C
Eqn. (6)’ row systems used the complete total loss of DEER. The ‘ℒσ=0superscriptℒ𝜎0\mathcal{L}^{\sigma}=0caligraphic_L start_POSTSUPERSCRIPT italic_σ end_POSTSUPERSCRIPT = 0’ row systems had no ℒσsuperscriptℒ𝜎\mathcal{L}^{\sigma}caligraphic_L start_POSTSUPERSCRIPT italic_σ end_POSTSUPERSCRIPT regularisation term in the total loss. The ‘ℒNLL=ℒ¯NLLsuperscriptℒNLLsuperscript¯ℒNLL\mathcal{L}^{\text{NLL}}=\bar{\mathcal{L}}^{\text{NLL}}caligraphic_L start_POSTSUPERSCRIPT NLL end_POSTSUPERSCRIPT = over¯ start_ARG caligraphic_L end_ARG start_POSTSUPERSCRIPT NLL end_POSTSUPERSCRIPT’ row systems replaced the individual human labels with ℒ¯NLLsuperscript¯ℒNLL\bar{\mathcal{L}}^{\text{NLL}}over¯ start_ARG caligraphic_L end_ARG start_POSTSUPERSCRIPT NLL end_POSTSUPERSCRIPT in the total loss.
It is common to replace such inconsistent labels with deterministic labels obtained by majority voting Busso et al. (2008, 2017) or (weighted) averages Ringeval et al. (2013); Lotfian and Busso (2019); Kossaifi et al. (2019); Grimm and Kroschel (2005). However, this causes a loss of data samples when a majority agreed emotion class doesn’t exist Majumder et al. (2018); Poria et al. (2018); Wu et al. (2021) and also ignores the discrepancies between annotators and the aleatoric uncertainty in emotion data.
was used for training the MCdp and ensemble baselines. The CCC loss was computed based on the sequence within each mini-batch of training data. The CCC loss has been shown by previous studies to improve the continuous emotion predictions compared to the RMSE loss Povolny et al. (2016); Trigeorgis et al. (2016); Le et al. (2017).
dropout Sridhar and Busso (2020b) and sequential Monte-Carlo methods Markov et al. (2015); Wu et al. (2022a).
Following prior work in continuous emotion recognition Ringeval et al. (2015, 2017); Sridhar and Busso (2020a); Leem et al. (2022), the concordance correlation coefficient (CCC) was used to evaluate the predicted mean. CCC combines
D
G𝐺Gitalic_G corresponds to an optional scaling factor and ci⁢nsubscript𝑐𝑖𝑛c_{in}italic_c start_POSTSUBSCRIPT italic_i italic_n end_POSTSUBSCRIPT represents the number of input connections.
In this work, we propose a learnable weight initialization scheme that utilizes limited available training data to learn discriminative characteristics from the volumetric medical images, which can improve the model performance without the need for any additional data or higher computation costs. Our approach uniquely leverages volumetric self-supervised tasks on the same dataset for weight initialization and segmentation tasks in medical imaging, demonstrating efficiency and efficacy.
Generally, standard data-independent weight initialization techniques are adopted for medical imaging tasks. However, medical image datasets are very different from natural image datasets with respect to the variabilities in terms of imaging modalities and anatomical structure. Also, the region of interest (tumors or any structural abnormality) is relatively rare compared to the background or normal regions in the 3D medical image scans. Hence, employing specific data-dependent weight initialization schemes tailored for medical image segmentation tasks can assist the model in learning more meaningful representations by incorporating prior knowledge about the variability in the imaging modalities and object anatomy. This approach reduces the bias towards dominant classes, ultimately enhancing the segmentation outcome.
In medical image segmentation, target organs and tissues are pixel-wise classified enabling better diagnosis, and treatment planning. Advances in deep learning methods have significantly improved medical image segmentation tasks, such as tumor [4], [13] and skin lesion [46] segmentation. Various successful convolutional neural network (CNN) models, self-attention (SA) based transformer models, and their combinations have been adapted for medical image segmentation tasks. Generally, it is necessary to have a large amount of annotated training data to achieve promising results with deep neural networks [9, 39]. However, it is a complex and expensive process to collect and annotate medical images to curate large-scale benchmark datasets. The ethical and legal constraints associated with medical data to preserve the privacy and security of sensitive patient information make the data collection and annotation tasks more challenging. Therefore, the majority of the existing medical image segmentation methods focus on improving the architecture of deep neural networks.
We observe that the choice of the initialization scheme plays an important role in network learning and can affect model convergence. For instance, Fig. 1 (left) shows that UNETR [14] converges to different solutions based on the model initialization. We can see a substantial decrease in performance for UNETR when initialized using the Kaiming approach, whereas the truncated normal approach yields improved outcomes compared to UNETR’s default initialization scheme. Using data-independent initialization schemes can likely limit the performance since medical segmentation datasets have fewer samples when compared with large-scale natural image benchmarks. Therefore, the model may struggle to learn the representations effectively during the training when the number of training samples is relatively lower with respect to the network parameters.
B
Using a device that is universally present in modern society, such as the smartphone, can be an important weapon in tackling the proliferation of Ae. aegypti mosquitoes. However, we need to overcome several practical challenges, including the limitation of existing datasets and computational constraints of devices used by the target audience.
The results of an evaluation made with mosquito wingbeat recordings (including the novel recording dataset and existing ones) provide evidence of the effectiveness, efficiency, and robustness of the proposed architecture in contrast to existing ones. The proposed neural network is 18.5% smaller in terms of the number of parameters, which is the main computational efficiency factor of the neural network heuristic. The neural network combined with a novel training technique can generate results equivalent to or superior to the state-of-the-art in accuracy, precision, recall, and F1. In addition, implementing a functional prototype provides evidence of the feasibility of running our architecture on a Asus Zenfone Max M3 Qualcomm Snapdragon 430 64-bit Octa core 4G RAM Android 8.1 and a Motorola Moto E6 Play Cortex-A53 1.5 GHz Quad Core 2 GB RAM Android 9.0 smartphones.
We divided the evaluation of the efficacy of our model into two stages. First, we consider the results predicted by the model for each of the six available datasets. In the second, we compare our model with the current state of the art considering our most challenging scenario (D⁢5𝐷5D5italic_D 5) and the dataset used in the referred work (D⁢6𝐷6D6italic_D 6).
In this work, we: (i) present a new dataset, (ii) propose a neural network topology, (iii) develop a training method as steps towards the implementation of monitoring applications of Ae. aegypti mosquitoes on low-cost mobile devices. The results of an evaluation with unpublished and pre-existing real datasets show the advance provided by our proposal towards state-of-the-art. The proposed neural network is 18.5% smaller in terms of the number of parameters, which is the main computational efficiency factor of the technique. The lean neural network combined with the improved training technique can generate results equivalent to or slightly superior to state-of-the-art in terms of accuracy, precision, recall, and F1 in all considered scenarios. In addition, implementing a functional prototype demonstrates the feasibility of running (without failures and crashes) the proposed solution on popular smartphones.
In Subsection 5.2, we present an evaluation of the effectiveness of the proposed neural network in comparison with state-of-the-art solutions. In Subsection 5.3, we present a proof of concept demonstrating the feasibility of running the proposed neural network on a low-cost smartphone.
C
The second component is pretext tasks, which is a self-supervised task that acts as an important strategy to learn data representations using pseudo-labels [17]. Pretext tasks have been summarized and categorized in the previous subsection, so repeated content will not be introduced again. The details can be found in Section 2.2.2 and Appendix  B.2.
Contrastive learning is a widely used self-supervised learning strategy, showing a strong learning ability in computer vision and natural language processing. Unlike discriminative models that learn a mapping rule to true labels and generative models that try to reconstruct inputs, contrastive-based methods aim to learn data representations by contrasting between positive and negative samples. Specifically, positive samples should have similar representations, while negative samples have different representations. Therefore, the selection of positive samples and negative samples is very important to contrastive-based methods. This section sorts out and summarizes the existing contrastive-based methods in time series modeling according to the selection of positive and negative samples. The illustration of the contrastive-based SSL for time series is shown in Fig. 3. In Appendix E.1 - E.5, the main advantages and disadvantages of five contrastive-based submethods are summarized.
The third component is model architecture, which determines how positive and negative samples are encoded during training. The major categories include end-to-end [16], memory bank [34], momentum encoder [35], and clustering [36]. More details of these four architectures are summarized in Appendix B.3.
Therefore, the first component is to construct positive and negative samples. According to the suggestions of Le-Khac et al. [29], the main methods can be divided into the following categories: multisensory signals, data augmentation, local-global consistency, and temporal consistency. Additional descriptions regarding the characteristics of these categories can be found in Appendix B.1.
This category focuses on model architectures and training objectives. The SSL methods can be roughly divided into the following categories: generative-based, contrastive-based, and adversarial-based methods. The characteristics and descriptions of the above methods can be found in Appendix A. Using the learning paradigm as a taxonomy is arguably the most popular among the existing SSL surveys, including [22, 20, 23, 24, 25, 26, 27]. However, not all surveys cover the above three categories. The readers are referred to these surveys for more details. In Table I, we also provide the data modalities involved in each survey, which can help readers quickly find the research work closely related to them.
B
The two TV-based models mentioned above penalize the gradient magnitude, which is completely localized. To address this limitation, Lefkimatis et al. proposed structure tensor total variation (STV)[22], where the information available in the local neighborhood of each point in the image domain is taken into account. Then the structure tensor square root of the eigenvalues of the image is penalized. Therefore, the obtained regularizer exhibits semi-local behavior.
We clarify that the symbol 𝒖𝒖\boldsymbol{u}bold_italic_u refers to the vector-valued image beyond this subsection.
where ∇𝒖1⁢(i)∇subscript𝒖1𝑖\nabla\boldsymbol{u}_{1}(i)∇ bold_italic_u start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ( italic_i ) represents discrete gradient[26]. Define the structure tensor of image 𝒖𝒖\boldsymbol{u}bold_italic_u at pixel i𝑖iitalic_i as
Any pixel i𝑖{i}italic_i of a vector-valued image 𝒖𝒖\boldsymbol{u}bold_italic_u has a Jacobian matrix, which is defined as
\boldsymbol{u}_{i,s}^{2}}∥ ∇ bold_italic_u ∥ start_POSTSUBSCRIPT 2 , 1 end_POSTSUBSCRIPT = ∑ start_POSTSUBSCRIPT italic_i = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_N italic_M end_POSTSUPERSCRIPT square-root start_ARG ∑ start_POSTSUBSCRIPT italic_s = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ∇ bold_italic_u start_POSTSUBSCRIPT italic_i , italic_s end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG since 𝒖𝒖\boldsymbol{u}bold_italic_u has been defined as a vector.
C
Consider a switching signal σ∈𝒮ℛ′𝜎subscriptsuperscript𝒮′ℛ\sigma\in\mathcal{S}^{\prime}_{\mathcal{R}}italic_σ ∈ caligraphic_S start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT caligraphic_R end_POSTSUBSCRIPT. Let Assumptions 1-4 hold. We will show that the switched system (1) is IOSS under σ𝜎\sigmaitalic_σ.
all}\>t\in[0,+\infty[.italic_w ( italic_t ) ⩾ roman_exp ( roman_Γ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT ( 0 , italic_t ) ) italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ⩾ 0 for all italic_t ∈ [ 0 , + ∞ [ .
For any interval I⊆[0,∞[I\subseteq[0,\infty[italic_I ⊆ [ 0 , ∞ [, ∥⋅∥Isubscriptdelimited-∥∥⋅𝐼\left\lVert{\cdot}\right\rVert_{I}∥ ⋅ ∥ start_POSTSUBSCRIPT italic_I end_POSTSUBSCRIPT is the essential supremum norm of a map from I𝐼Iitalic_I into some Euclidean space.
For a time interval ]s,t]⊆[0,+∞[]s,t]\subseteq[0,+\infty[] italic_s , italic_t ] ⊆ [ 0 , + ∞ [, we define
all}\>]s,t]\subseteq[0,+\infty[roman_T start_POSTSUPERSCRIPT roman_S end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT ( italic_s , italic_t ) = roman_T start_POSTSUPERSCRIPT roman_S end_POSTSUPERSCRIPT ( italic_s , italic_t ) and roman_T start_POSTSUPERSCRIPT roman_U end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT ( italic_s , italic_t ) = roman_T start_POSTSUPERSCRIPT roman_U end_POSTSUPERSCRIPT ( italic_s , italic_t ) for all ] italic_s , italic_t ] ⊆ [ 0 , + ∞ [.
C
Be that as it may, the question of further relaxing the restriction p≥m𝑝𝑚p\geq mitalic_p ≥ italic_m to accommodate the matrices with p<m𝑝𝑚p<mitalic_p < italic_m (i.e., the noise-only sample covariance matrix is also rank deficient) seems to be an important problem. Consequently, the noise-only sample covariance matrix becomes non-invertible and therefore, as per the literature,
Under the Gaussian assumption with n<m𝑛𝑚n<mitalic_n < italic_m, the largest generalized sample eigenvalue based detection in colored noise amounts to finite dimensional statistical characterization of the largest eigenvalue of complex correlated singular F𝐹Fitalic_F-matrix. In this respect, the joint eigenvalue density of the uncorrelated real singular F𝐹Fitalic_F-matrix has been derived in [55, 71, 57, 72]. The joint eigenvalue density of
Therefore, in this paper, capitalizing on powerful contour integral approach due to [73] and orthogonal polynomial techniques due to [74], we derive simple and tractable closed-form expressions for the joint eigenvalue density and the cumulative distribution function (c.d.f.) of the maximum generalized eigenvalue of the complex correlated singular F𝐹Fitalic_F-matrix when the underlying covariance matrix assumes a single spiked structure. The resultant c.d.f. expression consists of a determinant of a square matrix whose dimensions depend on the relative difference between the number of noise only samples p𝑝pitalic_p and the system dimensionality m𝑚mitalic_m (i.e., p−m𝑝𝑚p-mitalic_p - italic_m) but not their individual magnitudes. This key feature further enables us to bypass the determinant evaluation process in expressing the c.d.f. corresponding to an important configuration p=m𝑝𝑚p=mitalic_p = italic_m. Since the parameter p−m𝑝𝑚p-mitalic_p - italic_m can also be used as an implicit indicator of the quality of 𝚺^nsubscript^𝚺𝑛\widehat{\boldsymbol{\Sigma}}_{n}over^ start_ARG bold_Σ end_ARG start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT as an estimator of the unknown population noise covariance matrix, the above configuration corresponds to the lowest quality noise covariance estimator. Therefore, this configuration in turn dictates a performance lower bound on the leading eigenvalue as a test statistic. Apart from these developments, this new c.d.f. expression further facilitates the analysis of the receiver operating characteristics (ROC) of the largest root test in the sample deficient regime.
Moore-Penrose inverse can be used instead. In this respect, the distributional properties of the largest generalized eigenvalue under the null has been established in the literature. However, the statistical characterization under the alternative remains an important open problem.
This paper investigates the detection problem in colored noise using the largest generalized eigenvalue of whitened signal-plus-noise sample covariance matrix (a.k.a. F𝐹Fitalic_F-matrix) as the test statistic. This F𝐹Fitalic_F-matrix is endowed with a rank-one spiked underlying covariance structure due to the cumulative effects of whitening and the nature of the detection problem. Our specific focus is on the sample deficient regime in which the number of signal-plus-noise observations is strictly less than the system dimension (i.e., n<m𝑛𝑚n<mitalic_n < italic_m). In this regime, the corresponding F𝐹Fitalic_F matrix degenerates into a singular matrix. Therefore, we have assessed the performance of this detector by developing a new exact closed-form expression for the c.d.f. of the largest generalized eigenvalue of a complex correlated singular F𝐹Fitalic_F-matrix when the underlying covariance assumes a rank-one spiked structure. An exact functional relationship between the detection probability and the false alarm rate (i.e., ROC profile) does not seem to be feasible for a general system configuration. However, when the noise-only sample covariance matrix is nearly rank deficient (i.e., p=m𝑝𝑚p=mitalic_p = italic_m) such an explicit functional relationship has been obtained. This is one of the consequences of the powerful orthogonal polynomial approach that we have utilized in deriving the novel c.d.f.
C
When doing 40-way few-shot classification, we achieve a performance of 23.8% (this is a much more difficult task than the five-way setting).
To see what happens on the per-keyword level when MattNet is trained and tested on 40 few-shot classes, Fig. 6 shows the individual scores.
For classification (Table VII), we see that this approach actually improves performance over MattNet (line 1). This makes sense since the model is now fine-tuned exclusively on (mined) few-shot classes. But it also illustrates that pretraining on background data is essential. For retrieval (Table VI), we see an expected drop in performance comparing lines 1 and 3, because the latter model is not trained to distinguish between few-shot and non-few-shot classes.
When training MattNet on the 40 classes, performance on the same few-shot retrieval task for the original five classes drops marginally from 40.3% to 37.1%.
TABLE II: P𝑃Pitalic_P@N few-shot retrieval scores (%) on the five few-shot classes. K𝐾Kitalic_K is the number of support-set examples per class.
A
BMAD integrates fifteen SOTA anomaly detection algorithms, among which four are reconstruction-based methods and the rest eleven are feature embedding-based approaches. Among the reconstruction-based methods, AnoGAN [54] and f-AnoGAN [55] exploit the GAN architecture to generate normal samples. DRAEM [73] adopts an encoder-decoder architecture for abnormality inpainting. Then a binary classifier takes the original data and the inpainting result as input for anomaly identification. UTRAD [15]treated the deep pre-trained features as dispersed word tokens and construct an autoencoder with transformer blocks. Among the projection-based methods, DeepSVDD  [50], CutPaste [33] and SimpleNet [39] are rooted in one-class classification. DeepSVDD searches a smallest hyper-sphere to enclose all normal embeddings extraced from a pre-tarined model. CutPaste and SimpleNet introduce abnormality synthesis algorithms to extend the one-class classification, where generated abnormality synthesis is taken as negative samples in model training. Motivated by the paradigm of knowledge distillation, MKD  [53] and STFPM  [67] leverage multi-scale feature discrepancy between the teacher-student pair for AD. Instead of adopting the similar backbones for the T-S pair in knowledge distillation, RD4AD  [19] introduced a novel architecture consisting of a teacher encoder and a student decoder, which significantly enlarges the representation dissimilarity for anomaly samples. All of PaDiM  [17], PatchCore  [46] and CFA  [31] rely on a memory bank to store normal prototypes. Specifically, PaDiM utilizes a pre-trained model for feature extraction and models the obtained features using a Gaussion distribution. PatchCore leverages core-set sampling to construct a memory bank and adopts the nearest neighbor search to vote for a normal or abnormal prediction. CFA improves upon PatchCore by creating the memory bank based on the distribution of image features on a hyper-sphere. As notable from the name, CFlow  [22] and CS-Flow  [48] are flow-based methods. The former introduced positional encoding in conjunction with a normalizing flow module and the latter incorporates multi-scale features for distribution estimation.
Table 3: Anomaly detection performance quantified by DICE over BMAD. The top method for each metric are underlined. Note that Dice is a threshold-dependent metric. The results in the table is obtained with threshold of 0.5. By adjusting the threshold for each result, it is possible to achieve higher performance.
Figure 1: Diagram of the BMAD benchmarks. BMAD includes six datasets from five different domains for medical anomaly detection, among which three support pixel-level AD evaluation and the other three for sample-level assessment only. BMAD provides a well-structured and easy-used code base, integrating fifteen SOTA anomaly detection algorithms and three evaluation metrics.
Table 2: AD performance (mean+STD) comparison over the benchmarks in BMAD. The results are obtained from five repetitions of the experiment. NA indicates that a method doesn’t support anomaly localization. The top three methods for each metric are underlined.
Figure 2: Visualization examples of anomaly localization on the three benchmarks that support pixel-level AD assessment.
C
However, all these works focus on single-phase and multi-phase PDNets without considering low-voltage secondary distribution networks (SDNets). Meanwhile, recent years have witnessed the increasing proliferation of loads and DERs connected to SDNets. In previous distribution network OPF works, loads and DERs in SDNets are aggregated at the PDNet bus, neglecting the power flow in SDNets with service transformers. In U.S., loads and DERs in SDNets are connected to the PDNet via center-tapped service transformers. Such an aggregation could introduce errors in the comprehensive distribution network analysis, lacking empirical fidelity. Without explicitly considering SDNets in OPF, the SDNet operating status cannot be accurately estimated. For instance, those works cannot detect and analyze over- and under-voltage problems in SDNets. Moreover, with the increasing proliferation of grid-edge resources, managing the voltage quality in SDNets is more challenging.
To meet this gap, we propose an integrated primary-secondary distribution network OPF model, where service transformers and triplex service lines at the SDNet are carefully considered. This OPF model enables considering distributed energy resources at both the primary and secondary distribution networks, thus bridging the gap between the PDNet optimization and SDNet optimization with increasing empirical fidelity. Compared with existing studies, the main contributions of this study are as follows:
This paper proposes an optimal power flow for integrated primary-secondary distribution networks with service transformers. Instead of neglecting SDNets with service transformers, we model SDNets, including service transformers and triplex service lines, in the integrated primary-secondary distribution network OPF. To be more specific, the SOCP relaxation and linearization for the service transformer power flow, i.e., SOCP-ST and L-ST, are proposed, and the linearized power flow for triplex service lines, i.e., L-TSL/ C-L-TSL, is developed. Numerical studies are performed to show the effectiveness and superiority of our proposed models.
However, all these works focus on single-phase and multi-phase PDNets without considering low-voltage secondary distribution networks (SDNets). Meanwhile, recent years have witnessed the increasing proliferation of loads and DERs connected to SDNets. In previous distribution network OPF works, loads and DERs in SDNets are aggregated at the PDNet bus, neglecting the power flow in SDNets with service transformers. In U.S., loads and DERs in SDNets are connected to the PDNet via center-tapped service transformers. Such an aggregation could introduce errors in the comprehensive distribution network analysis, lacking empirical fidelity. Without explicitly considering SDNets in OPF, the SDNet operating status cannot be accurately estimated. For instance, those works cannot detect and analyze over- and under-voltage problems in SDNets. Moreover, with the increasing proliferation of grid-edge resources, managing the voltage quality in SDNets is more challenging.
Therefore, considering SDNet power flow in OPF is extremely vital for performing trustworthy, reliable, and practical distribution network analysis. There have been several works considering SDNets in distribution networks. The works [20, 21] demonstrate service transformer and triplex service line modeling in SDNets. The article [22] surveys various technical requirements for integration of the roof-top PV into the existing low voltage distribution network. In [23], researchers propose a hierarchical multilevel optimal power flow to minimize power losses in integrated primary-secondary distribution networks. But it does not provide a comprehensive treatment of the power flow in SDNets with service transformers. The work [24] proposes distributed optimal conservation voltage reduction for integrated primary-secondary distribution networks. However, this work simply regards a SDNet as a prosumer without considering service line segment and service transformer modeling, as well as power flow constraints. Service transformer and triplex service line power flow constraints in SDNets are non-linear and non-convex, resulting in challenges and difficulties in OPF solutions. The problem of incorporating service transformers and triplex service lines in SDNets into generalized OPF problems remains unresolved.
D
FSR and PEF information is transferred to the MCU using an inter-integrated circuit (I2C) with the intermediate assistance of ADS1015.
Converting analog signals to digital makes the system robust to subtle movements/motion artifacts, and reduces the signal’s sensitivity to the distance between the sensor position and the MCU.
Real-Time and on-the-Edge Flow Diagram Implementation for the Facial Expressions Scenario with Motion Threshold Detection and Two Stages Hierarchical Modeling; The First Stage is the MMG-Model to detect Null/Activity. The Second Stage is the Inertial-based Model to Classify the Facial Movements Dictionary in Fig. 1 (B).
The FSR and Piezo sensors are also straightforward to integrate due to their flexible design and minimal circuit requirement with only an analog to digital input constraint per sensor.
Besides, it is also easier to add slaves to an I2C bus compared to adding more analog channels to the system.
A
Early prediction involves leveraging the initial seismic waveforms received by proximal seismic stations, extracting critical earthquake features from these initial waveforms, and using these features to forecast the seismic intensities at an array of stations spanning the affected region. Informed by the limitations of existing EEW systems and seismic intensity prediction algorithms, we present an approach that utilizes a relatively small segment of initial seismic waveforms from multiple seismic stations dispersed across a geographically sparse region. We aim to accurately predict seismic intensity at these stations and others in the surrounding area where the seismic waves have not yet arrived [27]. Recognizing the graph-like structure of seismic station distribution and their interaction with seismic wavefield propagation, we employ a graph neural network (GNN) as the foundation of our approach [28, 29]. The unique power of GNNs lies in their ability to propagate information through the nodes of a graph, allowing us to predict seismic intensity at distant stations by exploiting a fraction of the information gathered at the early-receiving stations. In essence, GNNs enable us to make globally informed predictions with locally available data. Most importantly, to address the need for shorter time-window predictions, we incorporate self-supervised contrastive learning, enabling the model to be trained using larger time windows while making predictions using shorter ones [30, 31]. This integration of contrastive learning and specialized GNN layers results in an effective and efficient approach that can perform on significantly shorter time windows. Moreover, the inherent self-supervised nature of this contrastive learning approach eliminates the necessity for exhaustive labeling of the input data. We aptly name our proposed model as Seismic Contrastive GNN (SC-GNN).
In this section, we assess the performance of our proposed SC-GNN model for real-time seismic intensity prediction using three real-world seismic datasets. Additionally, we compare the effectiveness of our proposed model against several state-of-the-art baseline models by examining standard performance metrics.
We have demonstrated the efficacy of our approach through a comprehensive series of experiments conducted using three real-world seismic datasets. Experimental results substantiate that our approach consistently surpasses the performance of state-of-the-art techniques across a broad spectrum of evaluation metrics. In particular, on our principal dataset, our SC-GNN model demonstrates substantial improvement with a mean squared error (MSE) of 0.4172, reflecting an approximately 234% enhancement over the best-performing state-of-the-art GCN model. Additionally, our model maintains the lowest standard deviation of error of 0.61 and attains the highest correlation coefficient, indicating robustness, reliability, and a strong positive relationship between predicted and actual values. As the input time window diminishes, our model’s performance remains consistently superior to the baseline models, underlining its capability to handle variable input sizes efficiently.
As the input time window is reduced, the performance of all models degrades, indicated by an increase in the MSE. However, the proposed SC-GNN model consistently outperforms the baseline TISER-GCN and CNN models across all input time windows. The SC-GNN model maintains a significantly lower MSE compared to the baselines, demonstrating its robustness and effectiveness in handling varying input sizes. Furthermore, the deterioration in the performance of the baseline models occurs much faster compared to our proposed SC-GNN when the input time window is shortened. Notably, even when using a 5s window input, the SC-GNN model demonstrates a remarkable 143% improvement in performance compared to the next best-performing model, TISER-GCN, with a 10s input window.
The proposed model, SC-GNN, outperforms the baseline models across all metrics in the CI dataset. Our SC-GNN model achieves the lowest MSE of 0.4172, which reflects around 234% improvement over the state-of-the-art best-forming TISER-GCN model, indicating more accurate predictions. The SD of the error for the GNN model is the lowest at 0.6111, suggesting more consistent and reliable predictions than the other models. Furthermore, the SC-GNN model has the highest CC of 83.94%, signifying a strong positive relationship between the predicted and actual values.
B
Our main contributions are therefore: i) subjective evaluation of the numerical optimiser for the λ𝜆\lambdaitalic_λ multiplier using seven 4K HDR videos for AV1, ii) a statistical analysis of results regarding expert VS non-experts in HDR studies iii) evidence for the correlation of subjective scores with current HDR metrics and the impact of film grain in HDR quality.
(0.96) and HDR-VQM (0.93). This could be an indication that faster perceptual metrics even without HDR features (like MS-SSIM) may aid the per-clip optimisation process. Finally, we note that the experts MOS scores indicate a higher correlation to the objective metrics compared to the naive viewers, as anticipated from the analysis earlier.
We observed gains of 5.19% with 4.68% bitrate savings on average (up to 14.78%). Objective metrics in terms of SDR and HDR metrics have been analysed showing gains between 1.08% and 6.08%. Subjective analysis shows the perceived quality between experts and non-experts varies significantly over objective metrics when the video source has ISO noise. Due to the high variance in the subject responses, our experiments were not able to categorically say whether the proposed method yields perceivably better results than the one produced by the default encoder settings.
Our main contributions are therefore: i) subjective evaluation of the numerical optimiser for the λ𝜆\lambdaitalic_λ multiplier using seven 4K HDR videos for AV1, ii) a statistical analysis of results regarding expert VS non-experts in HDR studies iii) evidence for the correlation of subjective scores with current HDR metrics and the impact of film grain in HDR quality.
We are also releasing our dataset and MOS scores [8] to support HDR video quality metric development by the research community.
D
We trained two CNN models, one with grey-scaled inputs and the other one with RGB inputs to ensure that there was no color sensitivity when modeling with CNNs. The number of parameters for the two CNNs and FIN ensemble was 149M and 147M respectively. We shuffled and regrouped 90% of the data for model training, and the remaining 10% of the data was used for validation. All models were assessed according to the mean and standard deviation of their F-1 score, accuracy, and the number of epochs until convergence of the validation loss.
In Table I we compare the performance of the three models with respect to the mean AUROC across the ten testing folds, the standard deviation of the test set performance, and the number of epochs required for the models to converge. As seen in the table, the SVM had the lowest overall AUROC (0.611) of the four models, the DFNN using the raw radiomics features had the lowest AUROC (0.667) of the three Deep Learning network approaches. The mean AUROC of the FIN-embedded model was only slightly better than the CNN model (0.998 and 0.995 respectively); however, the standard deviation of the FIN-embedded model was 42% lower than the CNN approach and 95% lower than the DFNN. Importantly, the FIN-embedded model required the fewest number of epochs before convergence; the FIN-embedded model converged 20% faster than the CNN, and faster 17% than the DFNN. These results imply that FIN-embedded models provide enhanced classification performance, which is more robust and faster to train than the alternative approaches.
In Table II, we compare the performance of our proposed approach with the baseline models. We observed that the CNN model was more sensitive to grey-scaled inputs as demonstrated by a higher average accuracy and F-1 score, and lower standard deviation, relative to the RGB model. Importantly, the FINs ensemble outperformed both the RGB and grey-scaled CNN models: the FINs ensemble had a higher F-1 score and accuracy, with a 39% lower standard deviation for the F-1 score and 33% lower standard deviation for accuracy than the best performing baseline model (grey-scaled CNN). The three models all converged at similar epochs in terms of the validation loss.
We trained three multiclass classification models with the same structure as the baseline CNN and FINs in experiment I (Section III-A) using the collected data: (1) a CNN model with an embedded FIN ensemble that imitated the six radiomics features described in section II, (2) a baseline CNN model with RGB image inputs, and (3) a baseline CNN model with grey-scaled image inputs.
We trained two CNN models, one with grey-scaled inputs and the other one with RGB inputs to ensure that there was no color sensitivity when modeling with CNNs. The number of parameters for the two CNNs and FIN ensemble was 149M and 147M respectively. We shuffled and regrouped 90% of the data for model training, and the remaining 10% of the data was used for validation. All models were assessed according to the mean and standard deviation of their F-1 score, accuracy, and the number of epochs until convergence of the validation loss.
B
The design of the transmit beampattern of an antenna array is a classical problem that has already been studied in the context of MIMO radars, either for narrowband [40, 41, 42, 43, 44] or broadband [45, 46, 47, 48] array configurations, and in biomedical imaging [49]: in these cases, the probing waveforms are designed so that the realized beampattern matches, in a least-square sense, the desired one. This problem has also been addressed in DFRC systems [8, 50, 51, 52, 53], but, in the latter case, only the radar beampattern has been designed. To the best of the authors’ knowledge, the beampattern design problem for the considered RIS-based transmit architecture has been considered only in [54] for a narrowband system with a single source and with no waveform design. In this context, leveraging the preliminary results reported in [55], we make the following contributions.222The present contribution differs from [55] in several respects: the system description is enriched with more details and with a critical discussion on the narrowband and broadband operating conditions; the additional constant-modulus constraint is considered; the analytical derivation of all proposed solutions is given; and, finally, a richer numerical analysis is provided.
where Gi⁢j⁢(f)subscript𝐺𝑖𝑗𝑓G_{ij}(f)italic_G start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT ( italic_f ) is the frequency response of the channel between source j𝑗jitalic_j and element i𝑖iitalic_i of the RIS, xisubscript𝑥𝑖x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT models the frequency response of the RIS, and the remaining terms represent the frequency response of the channel between element i𝑖iitalic_i of the RIS and the observation point; in particular, Γi⁢(θ,φ)subscriptΓ𝑖𝜃𝜑\Gamma_{i}(\theta,\varphi)roman_Γ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_θ , italic_φ ) is the amplitude beampattern of element i𝑖iitalic_i of the RIS in the direction (θ,φ)𝜃𝜑(\theta,\varphi)( italic_θ , italic_φ ), 4⁢π⁢r24𝜋superscript𝑟2\sqrt{4\pi r^{2}}square-root start_ARG 4 italic_π italic_r start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG is the term due to the free-space attenuation from the RIS to the observation point, and r/c+τi⁢(θ,φ)𝑟𝑐subscript𝜏𝑖𝜃𝜑r/c+\tau_{i}(\theta,\varphi)italic_r / italic_c + italic_τ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_θ , italic_φ ) is the corresponding propagation delay, with c𝑐citalic_c denoting the speed of light and [63, Eq. (2.16)]
We develop a detailed model for the far-field signal of the RIS-based transmit architecture, eliciting the effects of the waveforms emitted by the active sources, the RIS adjustable phases, and the source-RIS channels.
We discuss the cases when this architecture is narrowband or broadband, eliciting the impact of the RIS size, location of the sources, and signal bandwidth.
Consider a transmit architecture composed of an illuminator with J𝐽Jitalic_J elements (also called sources) and a passive RIS with M𝑀Mitalic_M elements, as shown in Fig. 1. The carrier frequency is fcsubscript𝑓𝑐f_{c}italic_f start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT, and the lowpass signal emitted by the j𝑗jitalic_j-th source of the illuminator is denoted sj⁢(t)subscript𝑠𝑗𝑡s_{j}(t)italic_s start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ( italic_t ): it is a waveform with support included in the interval [0,T]0𝑇[0,T][ 0 , italic_T ] and Fourier transform approximately equal to zero outside the interval [−W/2,W/2]𝑊2𝑊2[-W/2,W/2][ - italic_W / 2 , italic_W / 2 ]. The i𝑖iitalic_i-th element of the RIS is located at position 𝒑isubscript𝒑𝑖\bm{p}_{i}bold_italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and introduces a (controllable) phase shift that is modeled by333We are assuming here that W𝑊Witalic_W is sufficiently small that the frequency response of the RIS can be considered constant in [−W/2,W/2]𝑊2𝑊2[-W/2,W/2][ - italic_W / 2 , italic_W / 2 ]. the unit-modulus complex value xisubscript𝑥𝑖x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT. The global reference system is located at the center of gravity of the RIS, so that ∑i=1M𝒑i=[000]𝖳superscriptsubscript𝑖1𝑀subscript𝒑𝑖superscriptmatrix000𝖳\sum_{i=1}^{M}\bm{p}_{i}=\begin{bmatrix}0&0&0\end{bmatrix}^{\mathsf{T}}∑ start_POSTSUBSCRIPT italic_i = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_M end_POSTSUPERSCRIPT bold_italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = [ start_ARG start_ROW start_CELL 0 end_CELL start_CELL 0 end_CELL start_CELL 0 end_CELL end_ROW end_ARG ] start_POSTSUPERSCRIPT sansserif_T end_POSTSUPERSCRIPT. The frequency response of the channel linking the j𝑗jitalic_j-th source, the i𝑖iitalic_i-th element of the RIS, and the point (r,θ,φ)𝑟𝜃𝜑(r,\theta,\varphi)( italic_r , italic_θ , italic_φ ) in the far-field region is444We are assuming here that the possible coupling effects among the element of the RIS and the sources can be neglected.
B
Besides designing computationally efficient algorithms for graph matching, another critical problem is determining when finding a good matching is possible at all. The authors in [GMP_relax] studied correlated random Bernoulli graphs and found that the convex relaxation method works only if the correlation between two graphs is sufficiently large. Similarly, [GMP_theory3, GMP_theory2, GMP_theory4, GMP_theory1] studied the condition of successful recovery from an information-theoretical perspective and proved the existence of a sharp phase transition in the recovery of the correct permutations for Gaussian models and Erdös-Rényi (ER) random graphs. An algorithm that approaches the transiting threshold has been proposed in [GMP_theory5].
Specifically, spectral graph matching finds proper representations of graphs in the eigenspaces of adjacency or Laplacian matrices, simplifying the original NP-hard combinatorial search problem into a more tractable form [GMP_eign1]. The author in [GMP_eign1] formulated the problem of exact graph matching as finding a permutation between adjacency matrices. It is shown that the optimal permutation can be obtained by first computing the eigendecomposition of adjacency matrices and then solving a bipartite maximum weighted matching problem. The work in [GMP_eign3] further extended the method in [GMP_eign1] to handle inexact matching of two graphs with different sizes by choosing the top eigenvalues as the projection space. Another extension of [GMP_eign1] is presented in [GMP_eign2], which considered the eigendecomposition of Laplacian matrices and used eigenvector histograms for alignment. The framework in [GMP_eign2] was further extended in [GMP_eigen5] introducing a local node similarity measure; in the paper, the spectral information on Laplacian matrices is referred to as the global node similarity. Moreover, [GMP_eigen6] proposed a multi-resolution spectral method. More recently, [GMP_eigen7] proposed a pairwise eigenvector alignment method that was reported to be robust to sign ambiguity and eigenvalue multiplicity.
We conduct simulations on both synthetic data and real-world datasets to verify the efficiency of the proposed method. The results demonstrate that our method is more robust against errors and achieves more accurate matching compared to the heuristic combination of graph topology inference and graph matching.
Numerical experiments in [GMP_sym2] reported that large ER random graphs have a very high probability of being asymmetric. Additionally, [GMP_sym2] identified that symmetric graphs have two or more subgraphs with the same inner structure and outer connections.
For a more general setup, it has been recognized that graphs with symmetrical structures, such as cycles, do not have a unique matching [GMP_relax2, GMP_sym2]. If symmetries exist, more than one permutation leads to an equally good matching; thus, the optimal one is difficult to identify.
D
Without TR: SSDiffRecon without cross-attention transformer layers is trained and tested. This model only consists of data-consistency and CNN layers. Other than the network, training and inference procedures are not changed.
Without DC: SSDiffRecon without the data-consistency layers is trained and tested. This model does not utilize data-consistency but the other training and inference details are the same as the SSDiffRecon.
Supervised: Supervised training of SSDiffRecon using paired under- and fully sampled MR images and pixel-wise loss is performed. Other than training, inference sampling procedures are the same as the SSDiffRecon.
UNET: Original UNET architecture in DDPM [13] is trained with the same self-supervised loss as in SSDiffRecon. Other than the denoising network architecture, the training and inference procedures are not changed.
Without TR: SSDiffRecon without cross-attention transformer layers is trained and tested. This model only consists of data-consistency and CNN layers. Other than the network, training and inference procedures are not changed.
A
In Section 6, we outline the key challenges and research opportunities for the modeling community in implementing circular systems engineering.
The most commonly used view of sustainability originates from Brundland Brundtland (1987) who defines sustainability as the capacity to “meet the needs of the present without compromising the ability of future generations to meet their own needs”. Brundland differentiates between three aspects of sustainability: economic (financial viability), environmental (reduced ecological impact, e.g., waste), and societal (elevated utility for society and the human). In an effort to adopt sustainability principles for software-intensive and technological systems, Penzenstadler and Femmer (2013) extend these aspects with a fourth one: technical sustainability, which describes the ability of a system to be used over a prolonged period. A similar notion of sustainability is voiced by Hilty et al. (2006) who define sustainability as the capacity to “preserve the function of a system over an extended period of time”.
The steadily accelerating innovation pathways of humankind have rendered prevailing systems engineering paradigms unsustainable. By Brundtland’s classic definition of sustainability (Brundtland, 1987), systems engineering falls short of “meeting the needs of the present without compromising the ability of future generations to meet their own needs”.
Second, along with the sustainability of the engineered system, the sustainability of the employed engineering methods is equally important. This is the principle of bipartite sustainability (Section 4.2).
By the terms of the four essential sustainability dimensions of Penzenstadler and Femmer (2013), prevalent systems engineering practices fail to fulfill important technical (long-term usage), economic (financial viability), environmental (reduced impact), and social (elevated utility) sustainability principles.
A
Note that (P3) can be solved by the exhaustive search method. Since in practical systems, the number of control bits k𝑘kitalic_k is generally not greater than 3, the complexity of this method is not exceptionally high.
Since the distances between the BS and the RIS, as well as the RIS and the user, are significantly greater than the distances between any two RIS elements, we assume that the path loss of the BS-RIS link and the RIS-user link via different RIS elements is identical. The reflected LoS components of each channel via the m𝑚mitalic_m-th RIS element are denoted by [28]
Moreover, since each RIS element can generate the same patterns of reflection coefficients, any RIS element with the identical expected phase shift has the same optimal reflection coefficient when solving the problem (P3).
which can be thought of as the difference between the desired reflection coefficient and the quantified reflection coefficient projected onto it. Here, the desired reflection coefficient denotes the reflection coefficient that maximizes the LARP without any constraints, with a reflection amplitude Am=1subscript𝐴𝑚1A_{m}=1italic_A start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = 1 and a phase shift θm∗subscriptsuperscript𝜃𝑚\theta^{*}_{m}italic_θ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT given in (18). We will refer to θm∗subscriptsuperscript𝜃𝑚\theta^{*}_{m}italic_θ start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT as the expected phase shift in the remainder of this paper. The optimization problem of the reflection coefficient of the m𝑚mitalic_m-th element is then simplified to (P3):
Therefore, we may build a look-up table by calculating the expected phase shift range cisubscript𝑐𝑖c_{i}italic_c start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT for each quantized reflection coefficient. This table provides the optimized reflection coefficients for any possible value of expected phase shifts. In other words, the reflection coefficients are quantized using the look-up table, which will further reduce the computational complexity of solving (P3).
B
\parencitelindsey2016improved,erera2013improved,powell1986local, and greedy algorithms \parenciteulch2022greedy.
terminal a𝑎aitalic_a has minimum flow diversion cost (daksubscriptsuperscript𝑑𝑘𝑎d^{k}_{a}italic_d start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT) for the
where different problems were classified as SND problems including flow planning problems, load planning problems, routing and dispatching problems, driver and fleet management problems, and vehicle routing and scheduling problems ([bakir2021motor]).
Traditional flow and load planning problems generate a single, primary (flow) path for each package. [baubaid2021value] study the value of
Specifically, this research bridges the gap between tactical flow and load planning \parencitebakir2021motor and operational execution \parenciteherszterg2022near, baubaid2023dynamic. The goal here is to efficiently and effectively adjust the existing load plan as more accurate package volume forecast is available; this problem is mentioned as an interesting and useful future research direction by [lindsey2016improved]. The flexibility to adjust the load plans enables terminal planners to better manage daily operations while maintaining service guarantees.
C
To gather the capacitive channel data for the machine learning training, we created a JavaScript application to connect with the DAU via Bluetooth Low Energy.
This process shares similarities with a High-Pass filter in order to resolve long-term signal drift and operation frequency while maintaining the signal features of foldable structure movement.
From capacitive sensing to track the folding process (C2F), through data-driven machine learning and 3D reconstruction.
The user interface provides raw signal visualizations to monitor the data collection process and features to pack the acquired data into files.
The FDC2214 utilized in the DAU generates raw capacitive data as frequencies, which was around 13.7 MHz.
C
In this subsection, we evaluate the prosody transfer performance of our model by transferring the emotional styles from the ESD dataset (Zhou et al., 2021) to speakers in the LibriSpeech test-clean dataset. We randomly choose 20 speakers from the LibriSpeech test-clean set and choose 50 sentences for each of them. Then, we randomly select an emotional speech clip from the ESD dataset for each of the sentences in the LibriSpeech test-clean set and use the selected emotional speech as the prosodic reference. We keep the reference speeches consistent among different models to exclude other interference factors.
Skerry-Ryan et al. (2018) first integrate a prosody reference encoder into a TTS system based on Tacotron (Wang et al., 2017), which is capable of performing similar-text prosody transfer. Recent works try to transfer prosody in different-text and different-speaker settings (Karlapati et al., 2020; Zaıdi et al., 2021) with the bottleneck of the prosody encoder. Among them, Daft-Exprt (Zaıdi et al., 2021) uses a gradient reversal layer to penalize the prosody encoder if its output contains information about the speaker identity from the reference utterance, which enhances the target speaker fidelity for cross-speaker prosody transfer. However, as pointed out by Sigurgeirsson & King (2023), current solutions do not learn a transferable representation of prosody, but rather an utterance-level representation that is relatively dependent on both the reference speaker and reference text.
We compare the prosody transfer performance of Mega-TTS 2 with two systems, including: 1) CopyCat (Karlapati et al., 2020), a model that utilizes a reference encoder architecture capable of capturing temporal prosodic representations; 2) Daft-Exprt (Zaıdi et al., 2021), a model disentangles identity and prosodic information through an adversarial training strategy that enables accurate prosody transfer across speakers. To make fair comparisons, we incorporate the techniques for prosody transfer from CopyCat and Daft-Exprt to the baseline system proposed in the previous subsection and scale up the model capacity to ensure that all models have a comparable number of parameters. All of the systems in this experiment are pre-trained on the LibriLight dataset.
We compare the zero-shot speech synthesis performance of Mega-TTS 2 with two systems, including: 1) VALL-E (zero-shot) (Wang et al., 2023), a large-scale zero-shot TTS model using large language models to generate discrete speech codes. Since VALL-E has not been open-sourced yet, we carefully implement it for optimal performance; 2) Baseline (fine-tune), a model that incorporates the GAN used in our Mega-TTS 2 to the FastSpeech 2 backbone (Ren et al., 2020). To make the baseline support adaptive scenarios, we use the powerful speaker encoder from Meta-StyleSpeech (Min et al., 2021) to extract timbre information. We carefully fine-tune the baseline system for 2,000 steps to reach an optimal balance between WER and SIM. Note that all of the systems in this experiment are pre-trained on the LibriLight dataset. We provide further explanation for the selection of the baseline systems in Appendix A.7.
Table 2 demonstrates that compared with CopyCat and Daft-Exprt, the moments (σ𝜎\sigmaitalic_σ, γ𝛾\gammaitalic_γ, and κ𝜅\kappaitalic_κ) of the generated speeches of Megs-TTS are closer to the ground-truth audio and the DE is lower than other methods, demonstrating the effectiveness of the proposed prosody interpolation techniques. Besides, we observe that our method can efficiently preserve the original timbre and maintain a high audio quality. We also visualize the prosody distribution before and after the prosody transfer process and compare the baseline system with Mega-TTS 2 in Figure 4.
B
The extreme sensitivity of single-photon sensors has made them an attractive technology for autonomous navigation [8], and accurate depth acquisition from mobile phones [2].
Our work brings NeRF to a new dimension of imaging at transient timescales, offering new opportunities for view synthesis and 3D reconstruction from multiview lidar.
We develop a novel time-resolved volumetric image formation model for single-photon lidar and introduce transient neural radiance fields for lidar view synthesis and 3D reconstruction.
We use the dataset to demonstrate new capabilities in transient view synthesis and state-of-the-art results on 3D reconstruction and appearance modeling from few (2–5) single-photon lidar scans of a scene.
Our approach differs significantly from all the previous work in that we investigate, for the first time, the problem of lidar view synthesis and multi-view 3D reconstruction in the single-photon lidar regime.
D
TABLE X: Performance of introducing additional supervisory signals from WavLM’s Transformer layers on the IEMOCAP, MELD, and CREMA-D datasets. Target Layers indicates the Transformer layers that provide supervisory signals to Vesper in hierarchical self-supervision
We are interested in assessing the performance of introducing additional supervision signals from WavLM to guide the learning of other layers in Vesper. Therefore, we use the 18th Transformer layer output from WavLM to supervise the learning of Vesper-4’s 3rd layer or Vesper-12’s 9th layer. Moreover, we employ the 6th layer output from WavLM to guide the learning of Vesper-4’s 1st layer or Vesper-12’s 3rd layer. Note that the proposed supervision for the intermediate and final layers is consistently applied. The recognition results on three corpora are reported in Table X. Unexpectedly, the addition of extra supervision signals does not result in a stable improvement. On the contrary, it compromises the model’s performance in most cases, particularly for Vesper-12. We hypothesize that the performance degradation may stem from the introduction of excessive supervision signals, which increases the difficulty of model training. Additionally, an abundance of supervision signals may constrain the model’s flexibility, causing Vesper to closely resemble WavLM and consequently leading to a reduction in performance. Therefore, supervising the intermediate and final layers of Vesper is sufficient to achieve the optimal performance.
Given that Vesper and WavLM possess the same model architecture, it is worth investigating the possibility of directly initializing Vesper with WavLM’s parameters. As shown in Fig. 3(b), the CNN encoder in Vesper is directly taken from WavLM. Suppose the numbers of Transformer layers employed in Vesper and WavLM Large are N𝑁Nitalic_N and M𝑀Mitalic_M, respectively. In this paper, N𝑁Nitalic_N is much smaller than M𝑀Mitalic_M for the purpose of compression. We attempt to uniformly extract the Transformer layers from WavLM Large to initialize the Transformer layers in Vesper. In particular, the i𝑖iitalic_i-th Transformer layer in Vesper is initialized by the parameter of the (1+⌊MN⌋×(i−1))1𝑀𝑁𝑖1(1+\lfloor\frac{M}{N}\rfloor\times(i-1))( 1 + ⌊ divide start_ARG italic_M end_ARG start_ARG italic_N end_ARG ⌋ × ( italic_i - 1 ) )-th Transformer layer in WavLM Large, where i∈[1,N]𝑖1𝑁i\in[1,N]italic_i ∈ [ 1 , italic_N ]; ⌊⋅⌋⋅\lfloor\cdot\rfloor⌊ ⋅ ⌋ rounds numbers down to the nearest integer. In addition to uniform extraction, we try to initialize the Transformer layers in Vesper by uniformly averaging the parameters across the Transformer layers in WavLM Large, which combines the representational capabilities of each layer like ensemble learning and model fusion.
Benefitting from the adoption of cross-layer self-supervision, the final output representation of Vesper contains both semantic information from the deep layers and acoustic information from the shallow layers. Hence, Vesper is expected to yield comparable performance when feeding only the representation of the last layer to the downstream classifier.
The results are presented in Table XI. On the IEMOCAP dataset, notable performance degradation is observed when only the representation from the last layer of WavLM is used. Specifically, there is a decrease of 10.2% in WA for WavLM Base and 2.0% in WA for WavLM Large. In contrast, Vesper using only the last layer representation displays only a minor decrease in performance (-0.2%∼similar-to\sim∼-0.3% in WA and -0.1%∼similar-to\sim∼-0.4% in WF1), and in the UA metric, it even exhibits improvement (+0.6% for Vesper-4 and +0.4% for Vesper-12). On the MELD dataset, using only the last layer representation from WavLM Base exhibits a severe decrease in performance (-1.8% in WA, -5.8% in UA, and -8.7% in WF1). WavLM Large also exhibits a decrease of -2.2% in the primary metric WF1. Remarkably, utilizing the last layer representation from Vesper-4 yields an improvement across all metrics, with a notable enhancement of +1.1% in WF1. For Vesper-12 on the MELD dataset, the performance decrease caused by merely using the last layer representation is kept within the range of -0.2% to -0.9%. On the CREMA-D dataset, employing only the last layer representation of WavLM Base leads to a decrease of 3.5%∼similar-to\sim∼3.6% in all metrics. Similarly, using only the last layer representation of WavLM Large results in a decrease of 6.2%∼similar-to\sim∼6.6% in all evaluation metrics. For Vesper-4, solely using the last layer representation leads to a decrease of 1.4%∼similar-to\sim∼1.9% in all metrics, where the performance degradation is mitigated. When considering Vesper-12, the performance with the last layer representation increases by 0.2%∼similar-to\sim∼0.4%. Given the experimental results, we conclude that the last layer representation of Vesper is informative enough to perform speech emotion recognition. This characteristic simplifies the utilization of the pretrained model, as it is no longer necessary to extract representations from each layer individually. Instead, relying solely on the representation from the last layer of Vesper proves to be adequate and sufficient.
A
In this paper, we introduce Spiking-UNet, a deep SNN for image processing, specifically designed for image segmentation and image denoising tasks. To achieve an efficient Spiking-UNet, we need to address the challenges of high-fidelity information propagation and the development of an effective training strategy. To overcome these challenges, we propose multi-threshold spiking neurons to enhance high-fidelity information transmission within the network. Furthermore, we utilize a conversion and fine-tuning pipeline that leverages pre-trained U-Net models, which ensures the effective training of our Spiking-UNet. We address inconsistent spiking rates caused by the significant variability in data distribution of skip connections through the application of a connection-wise normalization method during the conversion process. Additionally, we introduce a training method based on the spiking flow, which enables fine-tuning of the converted models while reducing the number of time steps required for inference. Experimental results demonstrate that our Spiking-UNet not only achieves comparable performance to the non-spiking U-Net model but also outperforms existing SNN methods for image segmentation and denoising tasks. Notably, our approach significantly reduces inference time by approximately 90% compared to the Spiking-UNet model without our fine-tuning.
In this paper, we introduce Spiking-UNet, a deep SNN for image processing, specifically designed for image segmentation and image denoising tasks. To achieve an efficient Spiking-UNet, we need to address the challenges of high-fidelity information propagation and the development of an effective training strategy. To overcome these challenges, we propose multi-threshold spiking neurons to enhance high-fidelity information transmission within the network. Furthermore, we utilize a conversion and fine-tuning pipeline that leverages pre-trained U-Net models, which ensures the effective training of our Spiking-UNet. We address inconsistent spiking rates caused by the significant variability in data distribution of skip connections through the application of a connection-wise normalization method during the conversion process. Additionally, we introduce a training method based on the spiking flow, which enables fine-tuning of the converted models while reducing the number of time steps required for inference. Experimental results demonstrate that our Spiking-UNet not only achieves comparable performance to the non-spiking U-Net model but also outperforms existing SNN methods for image segmentation and denoising tasks. Notably, our approach significantly reduces inference time by approximately 90% compared to the Spiking-UNet model without our fine-tuning.
Our research still has several limitations. As a preliminary exploration, we evaluate our Spiking-UNet on traditional and relatively small datasets for quick evaluation. In the future, Spiking-UNet will be tested on more newer and larger datasets. In addition, we only utilize Spiking-UNet on two image processing tasks. We will extend the application of Spiking-UNet, such as image super-resolution. Furthermore, we will explore the deployment of Spiking-UNet on neuromorphic chips to validate its effectiveness in real world.
In this paper, we propose the Spiking-UNet, an efficient integration of SNNs and the U-Net architecture for pixel-wise tasks, specifically image segmentation and denoising. To address the challenge of information propagation using spikes, we introduce multi-threshold spiking neurons that fire spikes at different thresholds, enhancing performance in a short time window. This mechanism promotes accurate spike propagation to subsequent layers, ensuring effective information flow. For effective training Spiking-UNet, we construct our model by converting a pre-trained U-Net model and subsequently fine-tuning it. During the ANN-SNN conversion, we observe that the data distribution from different parts of skip connections has significant variability, leading to inconsistent firing rates in Spiking-UNet. To overcome this, we propose a connection-wise normalization strategy, which equalizes the firing rates across skip connections, thereby ensuring more consistent and effective information transmission. In terms of fine-tuning, the traditional Back Propagation Through Time (BPTT) approach commonly used in training SNNs is computationally demanding. To mitigate this, we revise a training method that utilizes an accumulated spiking flow approach to more efficiently update the weights of the converted Spiking-UNet. To validate the effectiveness of our proposed Spiking-UNet, we conduct image segmentation experiments on the DRIVE staal2004ridge , EM segmentation cardona2010integrated , and CamSeq01 datasets fauqueur2007assisted , as well as image denoising on the BSD68, and CBSD68 datasets martin2001database . Experimental results demonstrate that our Spiking-UNet not only exceeds the existing SNN methods but achieves comparable performance to the corresponding U-Net.
We extensively evaluate the performance of Spiking-UNet on image segmentation and denoising tasks using multiple datasets. Our experiments demonstrate that Spiking-UNet not only outperforms existing SNN methods but also achieves performance comparable to the traditional U-Net model .
B
In this study, we provide a proof of concept in Geant4 by simulating two radiographic phantom images. We are able to calculate atomic number estimates which are consistent with ground truth, even using noisy input images and shielded materials. The authors acknowledge that this result is not a final proof that this method will work in a real system. We outline a clear procedure for porting this approach to commercial systems, and recommend that future work should focus on experimentally testing this method in deployed radiographic imaging scenarios.
This work was supported by the Department of Energy Computational Science Graduate Fellowship (DOE CSGF) under grant DE-SC0020347. The authors would like to acknowledge Cristian Dinca at Rapiscan Systems for his useful suggestions and feedback. The authors declare no conflict of interest.
The U.S. scans all high risk containers (identified as approximately 5 percent of seaborne containers CBO2016 ) using non-intrusive inspection (NII) technology NII . These radiography systems measure the attenuation of X-rays and/or gamma rays which are directed through the container to produce an attenuation image of the scanned cargo. Some radiography systems deploy dual energy photon beams, enabling classification of objects according to their Z𝑍Zitalic_Z, since the attenuation of photons depends on the atomic number of the intervening material. This technology improves the capabilities of these systems to identify nuclear threats or high-Z𝑍Zitalic_Z shielding.
The two pass methodology of Section 5.4 was then applied independently to 1000 noisy shielded phantom images. Figs. 1(c) and 1(d) show the first pass predicted atomic number and uncertainty, respectively. We observe that during the first pass, the presence of the steel shielding significantly suppresses the ability to identify materials by their Z𝑍Zitalic_Z. Next, using the output of the first pass to approximate λshieldsubscript𝜆shield\lambda_{\text{shield}}italic_λ start_POSTSUBSCRIPT shield end_POSTSUBSCRIPT and Zshieldsubscript𝑍shieldZ_{\text{shield}}italic_Z start_POSTSUBSCRIPT shield end_POSTSUBSCRIPT, we perform a second pass, mathematically stripping the steel. Figs. 1(e) and 1(f) show the predicted atomic number and uncertainty output of the second pass, and these results are quantified in table 2. Using this two pass approach, we are able to obtain atomic number estimates which are consistent with the ground truth Z𝑍Zitalic_Z of the unshielded objects, despite the thick shielding present in the images. Even with 25.4⁢cm25.4cm25.4\text{cm}25.4 cm of steel, we are able to classify graphite as low-Z𝑍Zitalic_Z, and lead and plutonium as high-Z𝑍Zitalic_Z. This offers a potential avenue for dual energy cargo inspection systems to identify shielded high-Z𝑍Zitalic_Z materials.
In this study, we provide a proof of concept in Geant4 by simulating two radiographic phantom images. We are able to calculate atomic number estimates which are consistent with ground truth, even using noisy input images and shielded materials. The authors acknowledge that this result is not a final proof that this method will work in a real system. We outline a clear procedure for porting this approach to commercial systems, and recommend that future work should focus on experimentally testing this method in deployed radiographic imaging scenarios.
A
In this setting, the OMP-FBP results in a computational complexity of the order 𝒪⁢(K2⁢log⁡K+K2⁢P2+K⁢P3+K3)𝒪superscript𝐾2𝐾superscript𝐾2superscript𝑃2𝐾superscript𝑃3superscript𝐾3{\mathcal{O}(K^{2}\log K+K^{2}P^{2}+KP^{3}+K^{3})}caligraphic_O ( italic_K start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT roman_log italic_K + italic_K start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_P start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT + italic_K italic_P start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT + italic_K start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT ) compared to 𝒪⁢(K2⁢log⁡K+K2⁢P2+K⁢P3+K2)𝒪superscript𝐾2𝐾superscript𝐾2superscript𝑃2𝐾superscript𝑃3superscript𝐾2{\mathcal{O}(K^{2}\log K+K^{2}P^{2}+KP^{3}+K^{2})}caligraphic_O ( italic_K start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT roman_log italic_K + italic_K start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_P start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT + italic_K italic_P start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT + italic_K start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) for the OMP-NFFT approach. For a fixed number of P𝑃Pitalic_P iterations in the OMP algorithm, we obtain 𝒪⁢(K3)𝒪superscript𝐾3\mathcal{O}(K^{3})caligraphic_O ( italic_K start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT ) as a dominating term for OMP-FBP compared to 𝒪⁢(K2⁢log⁡K)𝒪superscript𝐾2𝐾{\mathcal{O}(K^{2}\log K)}caligraphic_O ( italic_K start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT roman_log italic_K ) for OMP-NFFT, which emphasizes the reduction of computational costs by nearly one order of magnitude from an asymptotic perspective.
The first four steps in Algorithm 2 and Algorithm 3 coincide and include the univariate FFT algorithm, the OMP algorithm given in Algorithm 1 as well as a sparse matrix multiplication.
7:     Estimate p𝜽⁢[k]subscript𝑝𝜽delimited-[]𝑘p_{\boldsymbol{\theta}}[k]italic_p start_POSTSUBSCRIPT bold_italic_θ end_POSTSUBSCRIPT [ italic_k ] by anti-difference.
Finally, note that both algorithms include further computational steps as, for example, the forward difference and anti-difference operator as well as the DDP.
As the discrete Fourier transform and the forward difference operator are non-commutative, we now formulate and prove a relation between a signal and its forward differences in the Fourier domain.
C
Z. Wang and K. Ma, “Active fine-tuning from gMAD examples improves blind image quality assessment,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 9, pp. 4577–4590, 2021.
K. Ma, Z. Duanmu, Z. Wang, Q. Wu, W. Liu, H. Yong, H. Li, and L. Zhang, “Group maximum differentiation competition: Model comparison with few samples,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 4, pp. 851–864, 2020.
K. Ma, Z. Duanmu, Z. Wang, Q. Wu, W. Liu, H. Yong, H. Li, and L. Zhang, “Group maximum differentiation competition: Model comparison with few samples,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 4, pp. 851–864, 2020.
K. Ma, Z. Duanmu, Z. Wang, Q. Wu, W. Liu, H. Yong, H. Li, and L. Zhang, “Group maximum differentiation competition: Model comparison with few samples,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 4, pp. 851–864, 2020.
K. Ma, Z. Duanmu, Z. Wang, Q. Wu, W. Liu, H. Yong, H. Li, and L. Zhang, “Group maximum differentiation competition: Model comparison with few samples,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 4, pp. 851–864, 2020.
A
We plot the performance improvement of BigFWI compared to the InversionNet on each dataset in Figure 2. The quantitative results are provided in Supplementary Table S2 and S3. We observe that BigFWI shows a clear improvement for all the datasets except for datasets FVA and FVB, which are comprised of flat layers only. One potential reason for the model’s degraded performance on FVA and FVB is that the network focuses more on curved layers which exist in most of the other datasets, and thus has a negative impact on the prediction of flat layers. We also observe that BigFWI exhibits significant improvement in MAE and RMSE for A datasets compared to B datasets across all families. However, the comparison of SSIM demonstrates the opposite trend, with the B datasets exhibiting better SSIM improvements compared to the A datasets in the same family. This variation in performance could be attributed to the greater complexity of the B datasets. The discrepancies in the baseline structures may not impact statistical misfits such as MAE and RMSE, but they may influence the SSIM. The simpler A datasets tend to benefit slightly more from the larger data volume than the more intricate B datasets.
The generalization ability of our models to Marmousi and Overthrust are depicted in Figure 8. We also provide the results of Reverse Time Migration (RTM) and the differences of RTM compared to the ground truth in Supplement Figure S2 and S3, respectively. Generally, BigFWI yield more accurate inversion results compared to InversionNet. For the smoothed version of Marmousi, the results of BigFWI match the ground truth better in the shallow region. The BigFWI-M even generates some layered structures in the top-right corner. In the deep region, the results of the InversionNet models contain either too many false high-velocity predictions or a horizontal layer with relatively low velocity. In contrast, though the velocity in the results of BigFWI is lower than the ground truth, they capture the locations of high-velocity regions. For the original version of Marmousi, it is obvious that the performance of BigFWI is better than InversionNet. We observe the layered structures given by BigFWI, and we think this is learned from CVA and CVB.
Figure 5 shows the ground truth and predictions of velocity maps InversionNet, BigFWI-B, BigFWI-M, and BigFWI-L. Though the performance of InversionNet has improved statistically when trained on larger datasets, errors in prediction such as extra bottom layer anomalies (FVA), inaccurate layer values (CVA, CVB), and inaccurate structures (FVB, FFB, CFA, CFB) still exist. In contrast, BigFWI generally offers enhanced accuracy in layer location and velocity values. Comparing the performance of the BigFWI models, BigFWI-L and BigFWI-M outperforms BigFWI-B in many aspects. For instance, the flat interfaces in FVA and FVB are more flat and sharp in the results of BigFWI-L and BigFWI-M than the ones of BigFWI-B. BigFWI-M also predicts more accurate fault slopes in FFA and FFB. A similar observation can be obtained from the Style Family results, in which BigFWI-L and BigFWI-M predict more accurate kinematic information than InversionNet and BigFWI-B. Though InversionNet predicts more high-frequency components, the scatters are inaccurate in shape, which introduces even larger data misfit.
Figure 3 shows a comparison of velocities maps between ground truth, InversionNet, and BigFWI. We observe that InversionNet predicts the velocity maps with various errors, such as extra bottom layer anomalies (FVA), inaccurate layer values (CVA, CVB), and inaccurate structures (FVB, FFB, CFA, CFB). BigFWI models generally yield better performance in predicting the structure and values of the velocity maps than InversionNet. We see that the improved results of BigFWI are due to the knowledge learned from the
Figure 7 compares the generalization results of different methods to the ground truth. We observe that InversionNet produces inaccurate layer structures for out-of-distribution (OOD) data. In FFA, FFB, CVA, FFA, and SA, InversionNet’s generalization outputs have errors of blurred borders, wrong layer positions, and inaccurate velocity values, especially in deeper parts. Moreover, the results clearly have incorrect patterns from other datasets in more complex datasets (i.e., CVB, FFB, CFA and CFB). Meanwhile, these explain why we could find higher SSIM improvement in these four datasets in Fig. 6. Conversely, our BigFWI benefits from its large-scale cross-domain training set and can effectively capture more essential features of different datasets. Thus, BigFWI has more accurate predictions on OOD data than InversionNet.
C
A general randomized RNN consists of an untrained hidden layer with recurrent units, which non-linearly projects the input data into a high-dimensional feature space, and a trained output layer which scales and combines the outputs of the hidden layer in a linear fashion.
Among studies exploring the theoretical explanations behind the success of RC in time-series problems, one of the first is [37], which introduces a functional space approximation framework for a better understanding of the operation of ESNs.
Within the class of randomized RNNs, we consider a single reservoir ESN containing M𝑀Mitalic_M neurons with random and sparse interconnections (among other possibilities) and a single output (readout) weights matrix. This structure is depicted in Fig. 1.
In this work, we have introduced a clear signal processing approach to understand the echo state network (ESN), a powerful architecture of the Reservoir Computing (RC) family, belonging to the broader class of randomized recurrent neural networks.
Reservoir Computing (RC) [19] is a specific paradigm within the class of randomized RNN approaches where the echo state network (ESN) [20] is a popular implementation of the general RC framework.
D
APMGSRN is conceptually an adaptive version of fVSRN that splits the single highly-parameterized feature grid into many less-parameterized feature grids, where each feature grid now has the ability to adjust its transformation within the domain to focus on high-error regions.
We compare our model against two state-of-the-art scene representation networks, neural graphics primitives (NGP) [20] and fVSRN [30], for data reconstruction quality and training times at fixed model sizes.
Training times across models are similar, with our model being the least performant compared to NGP and fVSRN.
A visual comparison of our model and two state-of-the-art models compared with in this paper, fVSRN and NGP, is provided in Figure 2.
For fair comparison between the models, we develop a Python/PyTorch[21]-based neural volume renderer (included with our code on GitHub) that supports our own APMGSRN as well as state-of-the-art models fVSRN [30] and neural graphics primitives (NGP) using a hash-grid encoding [20], on top of raw data volume rendering.
C
Suppose a mission analyst is tasked with designing the preliminary trajectory for a deflecting spacecraft through kinetic impact. Gravity-assists, as demonstrated by Vasile and Colombo [23] and Negri [28, Chp. 4], can increase the chance of success by inexpensively boosting the orbital energy of the spacecraft and producing more favorable conditions for colliding the spacecraft with the asteroid. For this case, predicting the required deflection is crucial in solving the multiple gravity-assist problem to find trajectories leading to the desired velocity change, as exemplified by Negri [28, Chp. 4].
For now, the importance of this study is in documenting and explaining the source of those discrepancies. The provisional guideline offers a practical means to assess the appropriateness of the best simplified methods currently available for the preliminary design phase of a real case scenario. While it is conceivable to already devise a “complex approximation” to estimate the perturbation added by the shallow encounter with the planet, such an approach would inevitably involve obtaining the entire trajectory of the deflected asteroid, examining its displacement from the original trajectory, considering its minimal change in geometry during the encounter, and assessing the net effects of the perturbing body on the deflected asteroid. However, this “complex approximation” would not be practically useful in optimization routines during the preliminary design phase of a trajectory for a deflecting spacecraft, when analytical and simplified models are preferable. It would be much easier and practical to go straight for an accurate numerical solution in this case.
To make deflection missions possible, the prediction of deflection plays a crucial role in trajectory analysis and planning. As is the case in any deep space mission, trajectory design is crucial for finding a cost-effective solution within time constraints (which is even more important for a deflecting scenario). In a preliminary phase of the design process, the common approach is to resort to analytical approximations within the optimization routines to explore a larger design space promptly, which will indicate the most interesting regions for later refinement in more elaborate simulations [10]. In the context of asteroid deflection, the prediction of deflection is crucial for finding the trajectory that will allow the spacecraft to arrive for deflection in conditions that will produce the necessary deflection within safe margins.
In the preliminary design of gravity-assist trajectories, analytical approximations are often employed to explore a wide design space within a reasonable time frame. Similarly, for a deflecting spacecraft, a simple analytical prediction of deflection, such as the one presented in Section 4.2, would be preferable.
In this section, we present both methods that are likely to be applied in the preliminary design phase of the trajectory of a deflecting spacecraft. The approach presented in Section 4.1 is more suitable for low-thrust deflections, while the one presented in Section 4.2 will likely be the choice for impulsive approaches.
C
Φ⁢(x)=ℙ⁢[S=1∣x]Φ𝑥ℙdelimited-[]𝑆conditional1𝑥\displaystyle\Phi(x)=\mathbb{P}[S=1\mid x]roman_Φ ( italic_x ) = blackboard_P [ italic_S = 1 ∣ italic_x ]
participate in electricity markets where demand flexibility can be traded. The method obtains accurate predictions as it can be used with a controller which takes into account the actual state of the loads, as in this paper. Nevertheless, any other control system could be utilized, provided that load disturbances are taken into consideration. The
In the Monte Carlo simulation stage of the MC&ESB method, the time evolution of the VB is simulated for each candidate value of power x𝑥xitalic_x. An estimate of the probability p𝑝pitalic_p is given by Equation 11, where N𝑁Nitalic_N is obtained
Φ⁢(x)Φ𝑥\Phi(x)roman_Φ ( italic_x ) is a likelihood function and is given by the conditional probability
The monotonicity property of the power supply probability function Φ⁢(x)Φ𝑥\Phi(x)roman_Φ ( italic_x ) can be used to estimate the maximum power (positive or negative) that a given VB could supply to an aggregator or other market actor with a probabilistic guarantee measure. This estimate is a prediction of the demand flexibility that can be provided by a TCL aggregate that is modeled as a VB.
D
}}\|_{F}^{2}+\frac{1}{2}\|t_{i}^{\text{syn}}-t_{i}^{\text{pred}}\|_{1}\right]caligraphic_L = italic_λ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT [ divide start_ARG 1 end_ARG start_ARG 9 end_ARG ∥ italic_R start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT syn end_POSTSUPERSCRIPT - italic_R start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT pred end_POSTSUPERSCRIPT ∥ start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT + divide start_ARG 1 end_ARG start_ARG 2 end_ARG ∥ italic_t start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT syn end_POSTSUPERSCRIPT - italic_t start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT pred end_POSTSUPERSCRIPT ∥ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ]
To address these challenges, we propose a self-supervised deep learning framework called HetACUMN based on amortized inference. By alternating the variational image reconstruction task and the conditional pose prediction task, the VAE-based architecture explicitly enforces the disentanglement of the conformation and pose latent space. The experiments on simulated datasets show that HetACUMN outerformed other armotized-inference-base methods like cryoFIRE. On the other hand, our method has comparable accuracy of pose estimation and even better performance in estimating conformational distribution than non-amortized methods. Furthermore, we demonstrated that HetACUMN can also be used on experimental datasets.
We argue that this is not an inherent drawback of amortized inference. HetACUMN is a better alternative when the data and/or computational resource is limited, which performed well on both small and large datasets.
Therefore, the encoder can learn from a comprehensive pose training datasets even if the input EM image dataset is small or has highly biased
The conditional pose prediction task takes the encoder and decoder from the mulit-class image reconstruction task and reversed order.
C
For example, the CTC primarily concentrated on label-free images, thereby excluding stained images such as multiplexed immunofluorescent images. Similarly, the DSB challenge emphasized nucleus segmentation in fluorescent and histology images while disregarding phase-contrast and differential interference contrast images. The segmentation task in the CoNIC challenge is also limited to nucleus segmentation in H&E stained images.
Different from existing challenges that focused on specific microscopy image types, this initiative represents the first instance where cell segmentation algorithms were challenged to efficiently handle a broad spectrum of microscopy images with one single model and generalize to new images without manual intervention.
Consequently, the algorithms developed through these competitions are often tailored to handle only specific types of microscopy images, limiting their generalizability.
Biomedical image data science competitions have emerged as an effective way to accelerate the development of cutting-edge algorithms. Several successful competitions have been specifically organized for microscopy image analysis, such as the cell tracking challenge (CTC) [43, 33], the Data Science Bowl (DSB) challenge [3], and Colon Nuclei Identification and Counting Challenge (CoNIC) [14]. These competitions have played a crucial role in expediting the adoption of modern machine learning and deep learning algorithms in biomedical image analysis. However, it is worth noting that these challenges have primarily focused on a limited subset of microscopy image types.
To promote the widespread applicability of the new SOTA algorithms, all top-performing teams have made their algorithms publicly available on GitHub, complete with comprehensive preprocessing, training, and testing code. However, a critical challenge remains in bridging the gap between these advanced algorithms and their seamless integration into daily biological practice, as it often demands a basic level of computational expertise to apply these algorithms to new images successfully.
B
In this section, we conduct experimental validations of the proposed model architecture and compare it with other models in the field. We evaluate the performance of our proposed model through both qualitative and quantitative analyses. Furthermore, comprehensive ablation experiments on the model structure are performed to ascertain its effectiveness.
While LadleNet and LadleNet+ exhibit favorable results in qualitative comparisons, this does not imply that the generated image quality is optimal. There is still room for improvement in terms of image clarity and edge details in the images produced by LadleNet and LadleNet+. Qualitative analysis alone struggles to objectively describe the distinctions between the outputs of different models. To conduct a fairer comparison of image quality, we randomly selected 100 color-thermal pairs from the test set for quantitative experiments. We utilized four metrics, including AG (Artifacts Grade), MSE (Mean Squared Error), VIF (Visual Information Fidelity), and CC (Correlation Coefficient), to evaluate the discrepancy in quality between the VI images obtained after the thermal-to-visible translation by various models and the ground truth VI images. The results of the measurements for the 100 VI images produced by different models using these four metrics are presented in Figure 7.
Images from the dataset are transformed into tensors and resized to 300x400 dimensions, followed by central cropping to 192x256 dimensions. Both TIR and VI images are treated as 3-channel RGB images. Both LadleNet and LadleNet+ are trained for 120 epochs with a batch size of 40 samples. The initial learning rate is set to 0.01. Whenever the loss value does not decrease for 2 consecutive epochs, the learning rate is reduced by a factor of 0.1, with no more learning rate changes occurring for the subsequent 5 epochs. Both models use the Adam optimizer, with all parameters set to default values, and a⁢m⁢s⁢g⁢r⁢a⁢d𝑎𝑚𝑠𝑔𝑟𝑎𝑑amsgraditalic_a italic_m italic_s italic_g italic_r italic_a italic_d is set to True. All training and testing procedures are performed on an NVIDIA A40 GPU and an AMD EPYC 7543 CPU. The average duration to train one epoch for LadleNet is 8 minutes, while for LadleNet+ it is 10 minutes. The DeepLabV3+ model used in LadleNet+ employs ResNet101[net_10] as its backbone network and is pretrained on the Cityscapes dataset[net_3], a street scene dataset.
The KAIST dataset[ex_1] is a multispectral road dataset containing 95,328 color-thermal pairs. It covers road scenes in campus, street, and rural environments, and provides coarse time periods (daytime and nighttime) as well as fine time periods (sunrise, morning, afternoon, sunset, night, and dawn). For training and validation of our model, we select Set 01 from the training set, comprising 8,035 pairs of images, and Set 07 from the test set, comprising 8,141 pairs of images, forming our experimental dataset with a total of 16,176 color-thermal pairs. Among these, 80% are used for training and 20% for testing, exclusively focusing on daytime scenes. To ensure fair performance comparison among different models, we train existing TIR-to-VI image translation models on the training set and evaluate their performance on the test set.
We conducted comparisons between our proposed LadleNet and LadleNet+ models and existing methods for TIR-to-VI image translation, as well as some foundational image generation benchmark models. The compared methods include TIR2Lab[ex_2], U-net, U-net_IR2VI[ex_3], Pix2Pix, and Pix2Pix_IR2VI[ex_4]. While code for some of these methods might not be publicly available, we followed the descriptions in the respective papers to replicate the models111The replication code for U-net_IR2VI and Pix2Pix_IR2VI will be made available at https://github.com/Ach-1914/LadleNet/tree/main/Model/. To ensure fair quantitative comparisons among different models, we trained all models on the same training set and evaluated their performance on the same test set. We adopted four metrics to measure the image discrepancies, including Structural Similarity Index (SSIM), Multiscale Structural Similarity Index (MS-SSIM), L1 metric, and Peak Signal-to-Noise Ratio (PSNR). These metrics effectively quantify the disparities between the generated VI images and the ground truth VI images, and they are commonly used metrics in the field of image translation. The comparative results of various models on the test set are presented in Table 1.
C
In principle, with more iterations we could further clean the dataset and yield models that perform better.
In general, since we are using a model on the same data we trained it upon, the model could estimate the wrong separation as it might have memorized the wrong stem during the training procedure.
First of all, we highlight the impact that the errors in the data have on the performance of the model: training on SDXDB23_LabelNoise degrades the average separation quality by 1.42dB, while training on SDXDB23_Bleeding degrades it by 0.83dB.
Therefore, we assume that a source separation model trained on noisy data is good enough at approximating the oracle method and train HTDemucs on SDXDB23_LabelNoise using loss truncation from the beginning (i.e., the model being trained and the one approximating the oracle are the same).
Table 4: Results of our iterative refinement baseline. We use a source separation algorithm trained on corrupted data to improve the dataset: training the same model on the improved data increases the separation quality.
A
In order to gain more insight into the benefit of additional data, we show in Figure 4 the performance of the winning submissions on both leaderboards in comparison to the cocktail-fork baseline. Please note that there is only a single clip for movie ”000” and, hence, the box plot collapses to a horizontal line. Notably, the most significant disparities between the models trained on DnR and the winning entry in Leaderboard B are observed in animation movies (”002”, ”006”) and action movies (”003”, ”008”).
After the conclusion of the challenge, we contacted the top three teams in each leaderboard and invited them to contribute to this manuscript with a description of their approaches. In the following, the teams accepting our invitation present their submissions and discuss them. For the team subatomicseer, which ranked 3rd in Leaderboard A, we refer the interested reader to Fabbro et al., (2023) where the team explains their approach in detail.
The CDX track saw a dynamic evolution in terms of both the number of submissions and the SDR performance. The competition attracted a total of 19 teams for Leaderboard A and 10 teams for Leaderboard B, with 369 and 179 submissions respectively. Tables 1 and 2 present the final rankings for both leaderboards. The team aim-less emerged as the winner of Leaderboard A, achieving an average SDR of 4.3454.3454.3454.345 dB. On the other hand, Leaderboard B was topped by JusperLee, with an impressive SDR of 8.1818.1818.1818.181 dB. It is noteworthy that while all top five teams in Leaderboard A were from academic institutions, the highest scores in Leaderboard B were obtained by two commercial entities. This diversity of participants underscores the broad interest and applicability of the challenge across both academic and industry sectors. Figure 2 shows the progress that the teams could achieve during the course of the competition. We can observe that there was a continuous improvement of the SDR for each source and, especially at the end of the competition, there is a steady improvement visible as participants tuned their submissions.
To investigate whether this improvement resulted from participants overfitting to the visible portion of the test set, Figure 3 presents the difference between the hidden SDR (the SDR for all clips of CDXDB23 hidden from the participants) and the visible SDR (the SDR for all clips of CDXDB23 shown to the participants). Comparing two subsequent submissions where the newer one is worse in this difference than the previous one indicates that a participant is obtaining less improvement/more degradation on the hidden SDR than for the visible SDR hinting at a possible overfitting to the displayed global SDR. Hence, seeing “trajectories” of consecutive submissions in Figure 3 with negative slopes can be used to detect overfitting. Intriguingly, some degree of overfitting is apparent for the submissions to Leaderboard B towards the end of the challenge but less overfitting is observed for submissions to Leaderboard A. For example, looking at the results for the teams JusperLee and Audioshake, we can see that there is a negative trend in their submissions towards the end of the challenge. Especially for team Audioshake, this is visible as the models extracting sound effects and music seem to be tuned in the last week of the challenge period. Consequently, to reduce the potential effect of overfitting, participants needed to select three submissions at the end of the challenge which were then evaluated on the full CDXDB23 as discussed in Section 2.4.
Hence, in addition to the music demixing (MDX) track (Fabbro et al.,, 2023), which was already present in the Music Demixing Challenge 2021 (MDX’21) (Mitsufuji et al.,, 2022), we have added a new cinematic demixing (CDX) track to the Sound Demixing Challenge 2023 (SDX’23) in order to foster research in this direction. The challenge was facilitated through AIcrowd22endnote: 2https://www.aicrowd.com/challenges/sound-demixing-challenge-2023, and participants were invited to submit their systems to one of two leaderboards, depending on whether they used only DnR or additional training data. To rank the submissions, we developed a new hidden test set, called CDXDB23, derived from real movies. Through the establishment of this challenge framework, we observed substantial performance enhancements. Specifically, the top-performing system, trained solely on DnR, demonstrated an improvement of 1.8 dB compared to the cocktail-fork baseline based on MRX (Petermann et al.,, 2022). Remarkably, the highest-performing system on the open leaderboard, which allowed the use of any data for training, exhibited a significant improvement of 5.7 dB. These results underscore the efficacy of our challenge in driving advancements in the field of cinematic audio separation.
A
We further visualized the spectrograms of the speech waveforms enhanced by these two ablation models and our proposed MP-SENet as illustrated in Fig 5.
To investigate the effects of phase optimization approaches, we conducted ablation studies on the phase spectrum loss (denoted as “w/o𝑤𝑜w/oitalic_w / italic_o Pha. loss”) and complex spectrum loss (denoted as “w/o𝑤𝑜w/oitalic_w / italic_o Com. loss”), which explicitly and implicitly optimized the phase, respectively.
It can be clearly observed that after ablating the phase loss, the harmonic structures were significantly distorted, while this impact was slighter when ablating the complex spectrum loss.
Additionally, we utilize magnitude loss, complex spectral loss, and STFT consistency loss to train the MP-SENet model effectively.
For both complex spectral masking and mapping, the phase information was implicitly restored when optimizing the complex spectrum.
B
Many techniques consider pure pixel assumption for simplicity, while in real applications, pure pixels of some endmembers are often missing. The methods that rely on pure pixels for endmember extraction can be divided into three main groups: projections and extremes, simplex fitting methods, and multiple endmember extraction methods (endmember bundles).
Simplex Fitting: The endmembers are assumed to be located at the vertices of the simplex enclosing the data points. Therefore, they can be extracted by maximizing the data simplex. N-FINDR [57] searches for pure pixels that form the largest simplex by gradually inflating a simplex inside the data. Simplex volume maximization (SiVM) [23] extracts the endmembers by iteratively maximizing the simplex volume using
Projections and Extremes: This group often searches for extremes by iteratively projecting data points. The vertices can be selected as the extreme points after iteratively projecting the data in a particular direction. For instance, Pixel Purity Index (PPI) [55] scores the spectral vectors by projecting them onto a large set of random vectors (called skewers) and counting the number of times that each vector is an extreme point. Orthogonal subspace projection (OSP) [56] and Vertex Component Analysis (VCA) [26] selects endmembers iteratively by projecting the data into an orthogonal direction to the subspace spanned by the already selected endmember.
where, 0≤q≤10𝑞10\leq q\leq 10 ≤ italic_q ≤ 1, 𝐦𝐦{\bf m}bold_m contains the mean values of the spectral pixels, i.e., 𝐦=1n⁢𝐘𝟏n𝐦1𝑛subscript𝐘𝟏𝑛{\bf m}=\frac{1}{n}{\bf Y}{\bf 1}_{n}bold_m = divide start_ARG 1 end_ARG start_ARG italic_n end_ARG bold_Y1 start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT, and 𝐚(i)Tsubscriptsuperscript𝐚𝑇𝑖{\bf a}^{T}_{(i)}bold_a start_POSTSUPERSCRIPT italic_T end_POSTSUPERSCRIPT start_POSTSUBSCRIPT ( italic_i ) end_POSTSUBSCRIPT is the i𝑖iitalic_ith row of the matrix 𝐀𝐀{\bf A}bold_A. This term pulls the endmembers toward the center of mass. CoNMF uses both spatial and MV (geometrical) regularizers and solves the problem by projecting the data into a subspace. In [86], Robust CoMNF (RCoNMF) was proposed, which utilizes a geometrical penalty that minimizes the distances between the endmembers to be estimated and the boundary pixels (𝐏∈ℝp×r𝐏superscriptℝ𝑝𝑟{\bf P}\in\mathbb{R}^{p\times r}bold_P ∈ blackboard_R start_POSTSUPERSCRIPT italic_p × italic_r end_POSTSUPERSCRIPT). The main assumption is that the endmembers are close to the extremes of the data simplex (so-called boundary pixels). Hence, RCoNMF solves
Hyperspectral data often live in subspace having dimension much lower than the dimension of spectral bands defined by the sensor. Assuming r𝑟ritalic_r endmembers in the scene, the intrinsic/subspace dimension is r−1𝑟1r-1italic_r - 1, i.e., the data points can be represented by r−1𝑟1r-1italic_r - 1 linearly independent vectors or bases (in the case of orthogonal projections). Therefore, identifying such a subspace and projecting the data into that reduces the computational cost, memory consumption, and removes the noise and outliers.
B
0,&\text{otherwise},\end{cases}italic_u start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ( italic_r start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT , italic_α start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ( italic_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) , italic_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) = { start_ROW start_CELL italic_q , end_CELL start_CELL if italic_r start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ≤ italic_α start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ( italic_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) and italic_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ≤ over¯ start_ARG italic_x end_ARG , end_CELL end_ROW start_ROW start_CELL italic_q , end_CELL start_CELL if italic_x start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ≤ under¯ start_ARG italic_x end_ARG , end_CELL end_ROW start_ROW start_CELL 0 , end_CELL start_CELL otherwise , end_CELL end_ROW
The main contribution of this paper is to propose a tractable stochastic co-design method for simultaneously optimizing the selection of the storage tank size and control parameters for WDSs. We consider an aggregated WDS that captures the main features of WDSs. Water demands and electricity prices are stochastic. To handle these stochastic characteristics, we use Markov chain theory [24, 22] to analyze the evolution of the volume of water in the storage tank, which depends on both the size of the tank and the control policy. Furthermore, the control policy from the co-design solution can also be applied to existing WDS to improve operational performance. We provide three examples and a real case study in South Australia to illustrate and demonstrate the proposed method.
During the summer period in 2019, the operation using the optimized price thresholds for the given tank size (referred as co-design operation) resulted in a 13% decrease in pumping cost relative to trigger-level operations, while a 34% decrease is observed during the winter months. The reason for the larger savings in the winter months is that there are more opportunities to shift pumping from high price periods to low price periods when the demand is low. Overall, the co-design solutions saved 18% in pumping costs for the year 2019. The considered control policy based on the price threshold is hence effective and able to bring economic benefit to the operation of the existing water infrastructure.
As the infrastructure lifetime is set to 50 years, the operating cost is found by using the summer and winter parameters for 25 years each. For each tank size, the total cost is found by adding the capital cost and the operating cost. The results are reported in Table IV. As also shown in Fig. 15, it can be seen that a smaller tank size may save on capital costs but leads to higher operating and penalty costs. When the tank is too small, the risk of having less water in the tank than the minimum allowed increases. A larger tank provides more flexibility in storing water and meeting demands during high-priced times, but the savings in operating costs may not compensate for the increase in capital costs.
The aggregated WDS has both capital and operating costs. The objective for the co-design problem is to minimize these costs by simultaneously designing the tank size V𝑉Vitalic_V and the control policy while considering a long-term planning horizon N>0𝑁0N>0italic_N > 0 (the number of discrete-time steps). The overall co-design cost is given by
D
I⁢[X;Z]=𝔼p⁢(X,Z)⁢[log⁡p⁢(X,Z)p⁢(X)⁢p⁢(Z)].=𝐼𝑋𝑍subscript𝔼𝑝𝑋𝑍delimited-[]𝑝𝑋𝑍𝑝𝑋𝑝𝑍I[X;Z]=\mathbb{E}_{p(X,Z)}\left[\log\frac{p(X,Z)}{p(X)p(Z)}\right].italic_I [ italic_X ; italic_Z ] = blackboard_E start_POSTSUBSCRIPT italic_p ( italic_X , italic_Z ) end_POSTSUBSCRIPT [ roman_log divide start_ARG italic_p ( italic_X , italic_Z ) end_ARG start_ARG italic_p ( italic_X ) italic_p ( italic_Z ) end_ARG ] .
distance—gives a mutual information of 0.5420.5420.5420.542 bits, compared to the predictive coder’s mutual
The auto-encoder has a mutual information of 0.2270.2270.2270.227 bits while the predictive coder has a mutual
information of 0.2270.2270.2270.227 bits, which indicates the temporal dependencies encoded by the predictive
information of 0.6270.6270.6270.627 bits and the auto-encoder’s mutual information of 0.2270.2270.2270.227 bits
B
Different deep learning techniques have also been explored to estimate camera poses [29], predict depth maps [30], and improve feature tracking [31]. These methods have demonstrated the potential to enhance VIO algorithm performance and robustness.
The IMU measurement model formulates the relation between the measured raw values and the real values while considering the noises and biases. Commonly, the underlying assumption is that the inertial sensor measurements have a constant variance, which is often not the case. Here, we incorporate our DualProNet regression to adaptively estimate the current variance of the inertial measurements.
A common practice for optimization-based, visual-inertial SLAM algorithms is to use a pre-integration model [40] for the IMU measurements. This approach allows for integrating only the specific force and angular velocities between consecutive frames, regardless of the initial conditions of the position and velocity of the previous frames. This eliminates the need to repropagate IMU measurements after the starting conditions change, saving computational resources.
DualProNet – A Deep Neural Network for Noise Covariance Estimation: The deep neural network is responsible for estimating the inertial sensors noise covariance matrix based on sensor measurements. It takes as input the IMU sensor data, including specific force and angular velocity measurements, and outputs the current noise covariance. The network is trained using a dataset of sensor measurements and corresponding ground truth noise covariance values. DualProNet refers to our network architecture because of the different characteristics of accelerometers and gyroscopes.
Other research focuses on improving the inertial performance using learning based methods. InertialNet [32] inputs camera and IMU measurements to a CNN model that estimates the camera motion. RNIN-VIO [33] uses a long short-term memory network (LSTM) [34] to estimate the current position using IMU measurements and previous states from visual-inertial fusion. In [35] the authors adaptively estimate IMU bias for factor graph problems using LSTM and Transformers [36]. OriNet [37] authors utilized a deep learning framework for a 3D orientation estimation using a single IMU.
D
Finally, the estimated mask M𝑀Mitalic_M is multiplied element-wise with the magnitude spectrogram F𝐹Fitalic_F to obtain an enhanced magnitude spectrogram. Phase information P𝑃Pitalic_P is combined with this enhanced magnitude spectrogram to reconstruct the output audio i⁢S⁢T⁢F⁢T⁢(M⊙F,P)𝑖𝑆𝑇𝐹𝑇direct-product𝑀𝐹𝑃iSTFT(M\odot F,P)italic_i italic_S italic_T italic_F italic_T ( italic_M ⊙ italic_F , italic_P ). We evaluate audio quality using the SI-SNR [13] loss function, which compares it with the clean utterance of the target speaker.
ratio (SI-SNR) [13] as a loss function because it is a speech enhancement evaluation metric and a training target that makes optimizing and choosing the best model more precise. (3) Different from [8, 9], we focus on improving WER like [10, 11] but joint tuning ASR and speaker extraction model rather than optimizing the loss function. (4) We make a pre-trained self-supervised model based on wav2vec2 [14] architecture that works better for the noise acoustic condition. (5) For the mask estimation model, we use Conformer block [15] rather than LSTM and CNN in [10, 11]. Furthermore, we introduce a cross-extraction mechanism between the reference signal and noisy signal for speaker embedding, which enhances the performance of our model, as demonstrated in our experiments.
Both training processes (self-supervised wav2vec2 and ConVoiFilter model) use this data pipeline. One difference is that when we train self-supervised wav2vec2, we do not use other speakers’ utterances (the dashed arrow in figure 2 - means no cross-talk) because it makes the data too noisy and can fail the wav2vec2 model.
Self-supervised learning of speech representations [17] has recently shown its effectiveness in utilizing unlabeled speech data, resulting in outperforming the state-of-the-art (SoTA) in many automatic speech recognition (ASR) datasets. For our study, we utilized the pre-trained wav2vec2 model [14] to construct our ASR model. The wav2vec2 model acts as a speech encoder, and for the decoder, we used an RNN transducer [18]. Despite having a speech enhancement module to eliminate noise from the audio, the output may still contain noise. To address this issue, we utilized the self-supervised learning capabilities of wav2vec2 and created a pre-trained model by incorporating noise and room reverb into the unlabeled data (see section 3.1 for dataset details). Our subsequent experiment demonstrated that this approach significantly enhances the system’s accuracy.
We evaluated our system using four different model settings to assess the WER on various types of data (table 1). The first two settings consisted of ASR models only, which aimed to measure the ASR model’s ability to handle noisy data. The first ASR model, named ASR_based, was initialized from the pre-trained wav2vec2 base model [14], which was trained with 960 hours of Librispeech data. The second ASR model, ASR_noisy, was initialized from our pre-trained wav2vec2 base model, which was trained with the same 960 hours of data, but augmented with noise and reverb data. The remaining two settings incorporated a speech enhancement module. The third was a cascade model, in which ConVoiFilter and ASR were trained independently. The final model was end-to-end, where ConVoiFilter and ASR were jointly trained. The first two ASR models were trained with noisy audio (without cross-talk). In contrast, the cascade and end-to-end models were trained with noisy audio that may have cross-talk.
C
The objective of the image processing module is to prepare the pseudo-images of EEG data for compatibility with pre-trained vision transformer models. This transformation is accomplished through the following operations:
Normalization: Following resizing, normalization techniques are applied to standardize the pixel values of the input images to a predefined scale, typically ranging from 0 to 1 or -1 to 1. This normalization step stabilizes the training process and enhances the convergence of vision transformer models during subsequent fine-tuning.
The objective of the image processing module is to prepare the pseudo-images of EEG data for compatibility with pre-trained vision transformer models. This transformation is accomplished through the following operations:
After being processed by the image processor module, the preprocessed images are primed for direct integration into cutting-edge vision transformer models to extract features. In this paper, we select three popular vision transformer architectures to serve as feature extractors for the pseudo-images generated by AdaCT-I: ViT [8], Swin Transformer [9], and DeiT  [10].
Resizing and Standardization: The image processor firstly resizes the input images to a fixed size, ensuring uniformity and compatibility with the requirements of the vision transformer models. Additionally, color channels are standardized, and pixel values are normalized to promote uniformity across different images.
D
With the advent of emerging technologies like massive machine-type communication and Internet of Things in recent years, the wireless traffic has been growing at a tremendous rate. Specifically, the growth is expected to be more than five times between 2019201920192019 and 2028202820282028 [1] with the data intensive applications witnessing approximately 1000100010001000 times growth. Hence, in such times of ever increasing data traffic, the overall network lifetime gets significantly affected due to limited battery constraints, especially in scenarios, where a large number of devices are deployed over a geographical region. Thereby, charging or powering these devices becomes a costly and critical concern. As a result, self-sustainable and low-powered next generation wireless communication networks are gaining importance as well as relevance in both academia and industry. In this context, the fact that radio frequency (RF) signals can also convey energy apart from information, the concept of wireless power transfer (WPT) and, in particular, of simultaneous wireless information and power transfer (SWIPT) is considered as a very promising and enabling technology [2].
In this paper, we investigated the effects of conventional communication-based chaotic waveforms in SWIPT. We considered a SIMO set-up with a single antenna transmitter and a multi-antenna receiver, where the transmitter employs a DCSK-based signal generator. Specifically, depending on the requirement, each receiver antenna can be utilized in either of the IT or EH modes. By taking into account a generalized frequency selective fading and the nonlinearities of the EH process, we characterized the proposed architecture in terms of the BER and harvested DC. We showed that both these metrics are dependent on the parameters of the transmitted waveform and also on the number of the receiver antennas being utilized in the IT and EH mode, respectively. Moreover, we also investigated the BER-energy trade-off to propose different waveform designs corresponding to SWIPT and sole WPT and information transfer, respectively. Numerical results show that the proposed architecture is effective in combining the benefits of chaotic waveform-based signal design and SWIPT. An immediate extension of this work is to investigate the proposed architecture performance in terms of transmit signal design, where we aim to improve on the data rate, but without compromising on the BER and WPT performance. Moreover, other scenarios can also be considered including one with relays that harvest power and then transmit.
In this paper, we investigated the effects of conventional communication-based chaotic waveforms in SWIPT. We considered a SIMO set-up with a single antenna transmitter and a multi-antenna receiver, where the transmitter employs a DCSK-based signal generator. Specifically, depending on the requirement, each receiver antenna can be utilized in either of the IT or EH modes. By taking into account a generalized frequency selective fading and the nonlinearities of the EH process, we characterized the proposed architecture in terms of the BER and harvested DC. We showed that both these metrics are dependent on the parameters of the transmitted waveform and also on the number of the receiver antennas being utilized in the IT and EH mode, respectively. Moreover, we also investigated the BER-energy trade-off to propose different waveform designs corresponding to SWIPT and sole WPT and information transfer, respectively. Numerical results show that the proposed architecture is effective in combining the benefits of chaotic waveform-based signal design and SWIPT. An immediate extension of this work is to investigate the proposed architecture performance in terms of transmit signal design, where we aim to improve on the data rate, but without compromising on the BER and WPT performance. Moreover, other scenarios can also be considered including one with relays that harvest power and then transmit.
The key idea of SWIPT is to extract both information and energy from the received RF signal. This is achieved by employing a rectifying antenna (rectenna) at the receiver, which converts the received RF signals to direct current (DC). Unlike conventional energy sources, where the available power for harvesting, in itself, is erratic in nature [3], energy harvesting (EH) with SWIPT is a dedicated, controllable, continuous, and on-demand process. This joint extraction of information and energy is done by separating the information decoding and EH operations in space, in time, or in power [4]. The work in [5] explores SWIPT systems for multiple-input multiple-output broadcasting channel, where both separated and co-located EH and information decoding (ID) receivers are considered. The authors in [6] investigate the capacities of SWIPT systems with separate ID and multiple EH receivers. In this context, the aspect of accurate mathematical modelling of the EH circuit at the receiver plays a very important role. Some works propose simplified linear [7], piece-wise linear [8, 9] and tractable logistic nonlinear model [10] of the EH circuit that originates from the saturation of the output power beyond a certain RF input power due to diode breakdown. The logistic model is obtained by fitting measurements from practical RF-based EH circuits for a given excitation signal and is certainly an improved version of its oversimplified linear and piece-wise linear counterparts. The authors in [11, 12] characterize the power conversion efficiency of the EH circuit as a second order polynomial and a rational function of the average input power, respectively. However, all these models fail to characterize the actual working principle of the harvesting circuit. On the other hand, the work in [13] proposes a circuit-based realistic nonlinear EH model. This particular model not only relies on the EH circuit characteristics, but it also enables the design of waveforms that maximize the WPT efficiency.
In this section, we investigate the effect of SR-DCSK signals on the EH performance of the proposed receiver design, when K𝐾Kitalic_K (≤N)absent𝑁(\leq N)( ≤ italic_N ) antennas are considered for EH. Specifically, we investigate the impact of the reference length ϕitalic-ϕ\phiitalic_ϕ on the harvested DC in terms of the spreading factor β𝛽\betaitalic_β and the multipath fading wireless channel. By considering that the noise contribution to the harvested DC is negligible, we characterize the EH performance of the proposed receiver design by the following theorem.
C
Visual masking (Keysers & Perrett, 2002) would be alleviated because the data used were collected in many rapid series sequences.
These findings suggest that the initial 100 ms following stimulus onset contained limited information, possibly due to the hysteresis effect in the visual pathway (Sayal et al., 2020).
Besides, the signal after 600 ms brought a negative effect, probably due to the noise from other stimuli and cognitive processing, emerging a response increasing on the frontal lobe as in Fig. 2(A).
The phenomenon is consistent with the bottom-up hierarchy of visual system (DiCarlo & Cox, 2007), that the visual stimulus is processed sequentially by the V1, V2, V4 on the occipital cortex, and inferotemporal (IT) on the temporal cortex along the ventral stream for object recognition (Bao et al., 2020).
A clear response could be observed on the temporal cortex 100-600 ms after the onset, although the 200 ms stimulus onset asynchrony (SOA) still caused periodic responses on the occipital cortex.
D
The simulation is performed using Matlab 2023b with YALMIP 2021 [40] and MOSEK solver on a personal computer with 2.9-GHz, 8-core Intel i7-10700 processor and 16 GB of RAM.
To demonstrate the obtained results, let us consider an example of a 50505050-vertex networked control system depicted in Figure 3. The 50505050-vertex graph is an Erdős–Rényi random undirected connected graph where an edge is included to connect two vertices with a probability of 0.5.
In the Monte-Carlo simulations, we examine Erdős–Rényi random undirected connected graphs G⁢(N,q)𝐺𝑁𝑞G(N,q)italic_G ( italic_N , italic_q ) where N𝑁Nitalic_N is the number of vertices and an edge is included to connect two vertices with probability q=0.5𝑞0.5q=0.5italic_q = 0.5 [39].
Consider the networked control system (4) associated with an undirected connected graph 𝒢𝒢{\mathcal{G}}caligraphic_G where the system has the stealthy data injection attack (3) at the input of an arbitrary attack vertex a𝑎aitalic_a and outputs (6) at monitor vertices mk∈ℳsubscript𝑚𝑘ℳm_{k}\in{\mathcal{M}}italic_m start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ∈ caligraphic_M.
First, we begin with finding all the dominating sets of the considered 50505050-vertex graph (see Figure 3).
A
In general, we use four benchmark models for comparison: two quality-oriented and two value-oriented forecast models. The two quality-oriented forecast models are trained under MSE and pinball loss (asymmetric loss function), and abbreviated as Qua-E and Qua-Q respectively. To this end, the Qua-E provides predictions of the expected wind power, whereas the Qua-Q provides quantile predictions. We use the value-oriented forecasting approach proposed by [29] as a benchmark and abbreviate it as Val-L, as it requires the forecast model to be linear. At the training phase, this approach integrates parameter estimation with sequential decision-making, which is solved by commercial solvers. We set the value-oriented forecasting model trained via OptNet [17] as another benchmark and abbreviate it as Val-O. At the training phase, the gradient of the overall operation cost w.r.t. the forecast is derived via differentiable optimization.
We use root mean square error (RMSE) and average operating cost to measure the quality and value of forecasts respectively, both of which are negatively oriented.
In general, we use four benchmark models for comparison: two quality-oriented and two value-oriented forecast models. The two quality-oriented forecast models are trained under MSE and pinball loss (asymmetric loss function), and abbreviated as Qua-E and Qua-Q respectively. To this end, the Qua-E provides predictions of the expected wind power, whereas the Qua-Q provides quantile predictions. We use the value-oriented forecasting approach proposed by [29] as a benchmark and abbreviate it as Val-L, as it requires the forecast model to be linear. At the training phase, this approach integrates parameter estimation with sequential decision-making, which is solved by commercial solvers. We set the value-oriented forecasting model trained via OptNet [17] as another benchmark and abbreviate it as Val-O. At the training phase, the gradient of the overall operation cost w.r.t. the forecast is derived via differentiable optimization.
Fig. 4 displays the 4-day wind power forecast profiles of the value- and quality-oriented forecasting approaches. The real-time problem has a clear influence on value-oriented forecasting. Due to the higher opportunity loss for energy deficit than energy surplus, the proposed approach and the quality-oriented one issuing quantile tend to forecast less wind power production than the quality-oriented one issuing expectation, to avoid the less profitable situation of underproduction, such that the energy deficit is less likely to happen. This point can be further demonstrated by the last two columns of Table II, which show that on average, the value-oriented forecasting approach has lower real-time operation cost than the quality-oriented ones. Since the proposed approach tends to forecast less wind power (which has zero marginal cost in the day-ahead problem), the proposed approach has a larger day-ahead operation cost. However, thanks to co-minimizing the day-ahead and the real-time costs at the training phase, the value-oriented forecasting approach achieves lower operation costs for the overall operation.
In this analysis, the capacity of wind power is scaled to 40 kW, whose capacity is 57% of that of the maximum demand. In the test set, we use RMSE and average operation cost, calculated by (14) (which is the summation of the average day-ahead operation cost and average real-time operation cost), as the evaluation metrics for quality and value, respectively. Here, we consider two quality-oriented forecasting approaches, i.e., Qua-E and Qua-Q. The nominal level of the quantile, which is an input to the pinball loss, is set as 2929\frac{2}{9}divide start_ARG 2 end_ARG start_ARG 9 end_ARG.
A
This section presents a novel technique for efficiently locating a object amidst a high level of clutter using the variational Bayes framework. Different from tracking scenarios where the previous time step’s tracking result can provide an informative prior of the object’s position, the localisation strategy discussed here does not require such a strong informative prior. As a result, this technique can be useful for relocating objects once they lose track, or for initialising a tracking algorithm where the object positions are hardly known. Moreover, it has the potential to be developed into a strategy for estimating the number of objects. This section places an emphasis on clarifying the technique’s rationale and thus only considers the localisation of a single object. We will extend the relocation technique to handle multiple missed objects in Section VI, and integrate it into the complete VB-AbNHPP tracking algorithm.
Finally, we note from Fig. 7 and 8 that the proposed VB-AbNHPP-RELO is the only tested method whose OSPA decreases after the first 10101010 time steps. This is again due to the significant advantage of our effective relocation method over other existing methods. On the contrary, the OSPAs of all other methods either grow or remain the same even when the coalescence is less severe, meaning that missed objects are seldom retrieved and more objects may be lost due to the high clutter. In particular, we may find that in such a heavy clutter tracking scene, the birth process in PMBM-B is unable to retrieve the missed objects as effectively as the proposed relocation strategy. This may be because: 1) a distinct mismatch between the birth process and our model assumptions, and 2) the birth process cannot cover the whole surveillance area due to the significant computational time.
Corresponding to the multi-object tracking problem under the association-based NHPP measurement model, the target distribution from time step 00 to N𝑁Nitalic_N can be factorised as follows
We assume that the object to be localised follows the NHPP model, and that both the Poisson rates ΛΛ\Lambdaroman_Λ and the measurement covariance are known to us (e.g., have been estimated in advance using the proposed method).
Our strategy is designed for challenging scenarios where the clutter number in the survey area can be hundreds of times greater than the object’s measurement number. We aim to efficiently locate the object only using measurements received at a single time step, and the object can be anywhere in the survey area. Currently, our strategy can handle object Poisson rates as low as 3. In more challenging scenarios where the object Poisson rate is lower, the localisation may be achieved by using measurements from multiple time steps, and this case will be discussed in future.
C
Cygnus A images obtained by the different imaging methods are displayed in log10subscriptlog10\textrm{log}_{10}log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT scale in Figs. 1–4. Reconstructions are overlaid with additional panels consisting of (a) the associated residual dirty images displayed in linear scale to visually assess the fidelity to back-projected data, and zooms on selected regions of the radio galaxy, all displayed in log10subscriptlog10\textrm{log}_{10}log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT scale: (b) the inner core, (c) the West hotspots, and (d) the East hotspots. The overall visual inspection of Cygnus A reconstructions shows that R2D2 variants exhibit higher resolution than CLEAN variants, while generally corroborating the achieved depictions by AIRI and uSARA. They provide deep reconstructions, whose pixel values span nearly five orders of magnitude, which is in line with the target dynamic range estimate. A close-up inspection indicates that both R3D3 (Fig. 3, bottom) and AIRI (Fig. 4, bottom) stand out, owing to their high levels of detail and their limited amount of patterns that could be construed as artifacts. R2D2 (Fig. 2, bottom) and R2D2-Net (Fig. 3, top) seem to lack details in the faint extended emission. uSARA (Fig. 4, top) depicts spurious ringing and wavelet-like patterns. As expected, Hö-CLEAN (Fig. 1, top) delivers a poor reconstruction with severely limited dynamic range, due to its inherent approximate data model. Both CS-CLEAN (Fig. 1, bottom) and MS-CLEAN (Fig. 2, top) provide much improved reconstructions, with the former exhibiting grid-like artifacts due to its inadequate sparsity (identity) basis for the complex target radio source.
Examination of the West and East lobes of Cygnus A highlights the ability of R2D2 variants to provide a more physical depiction of their filamentary structure than the benchmark algorithms. On the one hand, CLEAN variants deliver a smooth reconstruction. On the other hand, uSARA, and to a much lesser extent AIRI, exhibit ringing artifacts in the West lobe (pointed at with a green arrows in Fig. 4). These artifacts are likely induced by pointing errors at the hotspots, resulting in the over-fitting of the high-spatial frequency content of the data by both uSARA and AIRI. Joint DDE calibration and imaging (using either AIRI or uSARA as the imaging module) can drastically reduce (if not remove) these artifacts (see their corresponding reconstructions provided in Dabbech et al., 2024). These findings suggest that R2D2 variants may be less prone to calibration errors than AIRI and uSARA.
The inner core consists of the point-like active galactic nucleus (AGN) of Cygnus A, from which two jets emanate (panels (b) of all figures). The reconstructions of R2D2 variants, uSARA, and AIRI show a super-resolved depiction of the region. In particular, R2D2 variants exhibit continuous emission between the AGN and both jets. uSARA exhibits wavelet-like artifacts around the AGN. CLEAN variants provide unresolved depiction of the source due to the restoring beam.
As recovered by R2D2 variants, AIRI and uSARA, the hotspots highlight the ability of these algorithms to resolve physical structure beyond instrumental resolution, in contrast with CLEAN variants (see panels (c) and (d) of all figures). Interestingly, where AIRI and uSARA exhibit artificial zero-valued pixels around the hotspots, all R2D2 variants depict continuous emission. This observation suggests the ability of R2D2 variants to achieve a more physical reconstruction.
Cygnus A images obtained by the different imaging methods are displayed in log10subscriptlog10\textrm{log}_{10}log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT scale in Figs. 1–4. Reconstructions are overlaid with additional panels consisting of (a) the associated residual dirty images displayed in linear scale to visually assess the fidelity to back-projected data, and zooms on selected regions of the radio galaxy, all displayed in log10subscriptlog10\textrm{log}_{10}log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT scale: (b) the inner core, (c) the West hotspots, and (d) the East hotspots. The overall visual inspection of Cygnus A reconstructions shows that R2D2 variants exhibit higher resolution than CLEAN variants, while generally corroborating the achieved depictions by AIRI and uSARA. They provide deep reconstructions, whose pixel values span nearly five orders of magnitude, which is in line with the target dynamic range estimate. A close-up inspection indicates that both R3D3 (Fig. 3, bottom) and AIRI (Fig. 4, bottom) stand out, owing to their high levels of detail and their limited amount of patterns that could be construed as artifacts. R2D2 (Fig. 2, bottom) and R2D2-Net (Fig. 3, top) seem to lack details in the faint extended emission. uSARA (Fig. 4, top) depicts spurious ringing and wavelet-like patterns. As expected, Hö-CLEAN (Fig. 1, top) delivers a poor reconstruction with severely limited dynamic range, due to its inherent approximate data model. Both CS-CLEAN (Fig. 1, bottom) and MS-CLEAN (Fig. 2, top) provide much improved reconstructions, with the former exhibiting grid-like artifacts due to its inadequate sparsity (identity) basis for the complex target radio source.
B
The Vthresholdsubscript𝑉thresholdV_{\text{threshold}}italic_V start_POSTSUBSCRIPT threshold end_POSTSUBSCRIPT, Vresetsubscript𝑉resetV_{\text{reset}}italic_V start_POSTSUBSCRIPT reset end_POSTSUBSCRIPT and α𝛼\alphaitalic_α in Eq. 1 is set as 1, 0, 2 respectively. We use the Atan function as the surrogate gradient function. The number of spiking S4 layers is set as 4 and the hidden size H𝐻Hitalic_H is 256. The λ𝜆\lambdaitalic_λ in Eq. 8 is 0.001.
We compare spiking S4 and its ANN equivalent with the Intel DNS Challenge baseline neuromorphic model Sigma Delta Network [17] and two open-sourced performant ANN models Wave-U-Net [23] and FRCRN [2].
Table 2: Results on Voice-Bank+Demand dataset. The ANN and SNN-based models are separated by a horizontal line.
As shown in  Table 1, S4 and Spiking S4 are competitive in ANN-based and SNN-based groups, respectively. FRCRN is based on the Complex-Unet and recurrent structure and achieves the best performance among the ANN models but incurs high training and inference costs. S4 is the closest to FRCRN with a much lower computation cost. For the SNN groups, our spiking S4 is slightly inferior to its ANN equivalent but clearly outperforms the Sigma-Delta network in all the indicators.
Table 1: Results on DNS Challenge 2023 validation set and test set. The ANN and SNN-based models are separated by a horizontal line.
D
The stability issues brought by renewable energy sources with non-Gaussian uncertainties in isolated microgrids are also investigated in [13], where the stability chance constrained optimal power flow is formulated as a bi-level optimization with the lower level handling the stability index through semi-definite programming.
where 𝐃𝐃\mathbf{D}bold_D is the uncertain parameter distribution and 𝒫𝒫\mathcal{P}caligraphic_P the ambiguity set. In this work, we consider the ambiguity set based on the first- and second-order moments, which is widely used for the distributionally robust formulation in the literature [24]. However, most of the existing work considers linear constraint in optimization [24] or LTI system with linear decision rules or affine feedback policies in control design [25]. As for our concerned stability-constrained optimization problem, the stability index is nonlinear in terms of both decisions and uncertain variables. How to manage this type of problem in optimization has not been investigated to the best of the authors’ knowledge.
Having obtained the uncertainty information of the stability constraint coefficients, a distributionally robust stability-constrained UC problem can be formulated, where the overall system operation cost is minimized subjected to a number of constraints, such as power flow and power balance constraints, thermal unit constraints, and the distributionally robust system stability constraints.
Note that since this work focuses on the topic of stability-constrained optimization, the reviewed research is not limited to a specific problem such as optimal power flow or Unit Commitment (UC).
Note that the comparison against some other existing approaches is not straightforward or well-defined due to the following reasons. i) The concerned problem of the parameter uncertainty associated with the system dynamic model within the framework of stability-constrained optimization has not been discussed or dealt with in the literature. ii) The comparison with other approaches such as robust and chance-constrained optimization may be unnecessary, since the authors do not claim the formulation based on the DRO in the presented method is the best choice under all circumstances. Actually, applying which type of uncertainty management approach depends on the system operators’ knowledge of the uncertain parameter as discussed in Section III-A.
C
We evaulate the model performance with Dice similarity coefficients(DSC). The segmentation results of 132 brain regions and TICV/PFV from the plain UNesT and our model UNest extenstion is shown in Table 2, Figure 2 and Figure 3. We show that we can achieve accurate TICV/PFV segmentation, reflected in DSC scores of 0.962 and 0.954. When comparing the segmentation performance for the 132 brain regions, our achieved DSC score of 0.751 closely aligns with the plain UNesT’s performance level of 0.759.
We evaulate the model performance with Dice similarity coefficients(DSC). The segmentation results of 132 brain regions and TICV/PFV from the plain UNesT and our model UNest extenstion is shown in Table 2, Figure 2 and Figure 3. We show that we can achieve accurate TICV/PFV segmentation, reflected in DSC scores of 0.962 and 0.954. When comparing the segmentation performance for the 132 brain regions, our achieved DSC score of 0.751 closely aligns with the plain UNesT’s performance level of 0.759.
Table 2: DSC score for UNesT and UNesT with TICV/PFV estimation on 132 brain regions, TICV and PFV. Note: LCI and UCI represents the lower and upper bounds of the 95% confidence interval, respectively.
In this study, we enhance the UNesT framework by incorporating intracranial measurements. Specifically, we integrate TICV/PFV estimation by introducing two additional convolutional layers. These layers simultaneously process the TICV and PFV outputs alongside the other 132 brain regions. To address the data scarcity problem, we follow the practice in UNesT that pretraining the model with a large dataset with pseudo labels and finetune with human annotations. We use 45 T1w 3D volumes from Open Access Series Imaging Studies (OASIS) [27] to finetune the model, where both 133 whole brain classes and TICV/PFV labels are available. Our results show that we can achieve accurate TICV/PFV estimation while maintaining a comparable level of performance across 132 regions on whole brain segmentation.
Herein, we enhance the current hierarchical transformer UNesT for whole brain segmentation by integrating intracranial measurements. More precisely, we include TICV/PFV estimation by introducing an extra set of convolutional layers. These additional layers enable the estimation of TICV and PFV segmentation masks alongside the other 132 brain regions. Importantly, this is achieved while ensuring that the performance of these 132 brain regions remains at a comparable level. This work expands the potential usage of UNesT to various other downstream analyses.
B
To tackle this challenge, researchers have introduced deep learning algorithms, such as Startdist [3] and Cellos [2]. However, these deep learning-based approaches demand a substantial amount of annotated data for effective algorithm training. Moreover, their limited scope in handling various modalities hinders their generalizability. For each distinct type of microscopy image, re-training the models becomes necessary, posing practical limitations on their applicability.
In this study, we explore the potential of SegmentAnything [4], a foundation model trained on an extensive dataset of 11 million images encompassing diverse modalities, to automate individual organoid detection in microscopy images. Moreover, we have integrated comprehensive post-processing and analysis of morphological properties using the masks generated by SegmentAnything. The workflow is demonstrated in Fig. 1. Our main claim is that this proposed pipeline enables both automatic and accurate organoid detection, as well as fully automated organoid morphology analysis.
SegmentAnything and post processing. In our research, we utilized the Python API for SegmentAnything and evaluated three pretrained models [4], namely ViT-B, ViT-H, and ViT-L, ultimately selecting the ViT-H model for inference due to its consistent performance across various microscopy analyses. However, we encountered challenges with the SegmentAnything-generated masks, as is shown in FIg. 2, which required post-processing to achieve accurate cell identification.
The first issue we encountered was that SegmentAnything sometimes misidentified the background as an object, resulting in non-zero indices for the background in the masks. Secondly, the high resolution of whole microscopy images necessitated the use of cropped patches for model fitting. However, this approach introduced incomplete organoids along the edges of the patches, leading to erroneous analysis of morphological properties. To address these concerns, we implemented an automated process where we the boundaries of the image patches were examined, and all objects located in these regions were excluded. A third challenge was observed with organoids possessing a lumen structure, where the model inaccurately demarcated the regions into two separate objects. To rectify this problem, we computed the maximum boundary of each mask and unified all values within this boundary. Lastly, debris might be erroneously identified as objects (organoids in this scenario) by the model. Unfortunately, we have not yet found an automated method to remove them. Thus, we manually marked these non-organoid structures and deleted them, which, when compared to manually identifying all organoid structures, proved to be a relatively simpler task.
In this paper, we utilized the SegmentAnything model in automatic organoid structure identification in microscopy images. We claim that the SegmentAnything model showed promising performance, and our post-processing efforts were also necessary to enhance the accuracy of organoid structure detection and ensure reliable organoid morphology analysis. Overall, this research contributes to the field of organoid analysis in microscopy images by presenting an efficient approach for individual organoid detection and morphology analysis without any pre-requisites on data annotation. The automated pipeline offers promising avenues for accelerating and enhancing organoid features characterization and quantification, paving the way for further advancements in organoid research and related disciplines.
A
Let us now bound λmin⁢(𝐀)subscript𝜆𝐀\lambda_{\min}(\bm{A})italic_λ start_POSTSUBSCRIPT roman_min end_POSTSUBSCRIPT ( bold_italic_A ) with Gershgorin’s circle
𝒖ksuperscript𝒖𝑘\bm{u}^{k}bold_italic_u start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT locally by applying the space-variant δ𝛿\deltaitalic_δ-stencil.
ρ⁢(𝐀)𝜌𝐀\rho(\bm{A})italic_ρ ( bold_italic_A ), as a function of λ1subscript𝜆1\lambda_{1}italic_λ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and λ2subscript𝜆2\lambda_{2}italic_λ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. This leaves the
As 𝐀𝐀\bm{A}bold_italic_A applies the δ𝛿\deltaitalic_δ-stencil, this theorem states that the smallest
𝑨≠𝟎𝑨0\bm{A}\neq\bm{0}bold_italic_A ≠ bold_0 such that for the spectral norm ρ⁢(𝑨)>0𝜌𝑨0\rho(\bm{A})>0italic_ρ ( bold_italic_A ) > 0 holds.
C
In practice, our findings motivate the use of regularization mechanisms that compensate for large autocorrelation of audio data, e.g., by adding adaptive noise.
This phenomenon is evidence of instabilities of a convnet at initialization. Our preliminary experiments showed that these instabilities are not compensated during training. Yet, further examination is needed to formulate a rigorous statement here.
Interestingly, fixing the convnet weights to form a filterbank on the mel–scale brings the PER to 18.3.%, and fine-tuning them by gradient descent, to 17.8%.
To be able to draw conclusions from the instabilities at initialization to instabilities during training, further investigations of the effects of gradient descent in this setting are necessary.111
With this perspective, we show that natural autocorrelation characteristics of audio signals trigger instabilities in ΦΦ\Phiroman_Φ with high probability. We also find that the bounds A,B𝐴𝐵A,Bitalic_A , italic_B are highly sensitive to the design of the random filterbank, i.e., the number and length of the filters.
C
Intuitively, the querying process has similarity to hypernetworks (David et al., 2016), which generates weights based on data itself to fully exploit the structure of the data.
The proposed layer can be easily incorporated into the transformer, giving a frequency-aware (FA) encoder that is both expressive and computationally efficient.
We replace the multi-head self-attention with our proposed multi-head frequency filter layer Freq−L⁡(⋅)FreqL⋅\operatorname{Freq-L}(\cdot)start_OPFUNCTION roman_Freq - roman_L end_OPFUNCTION ( ⋅ ) to mix the information across the sequence of tokens, which gives
To address those two questions, we propose a multi-head frequency filter layer to build a frequency-aware transformer encoder FA−Enc⁢(⋅)FAEnc⋅\operatorname{FA-Enc(\cdot)}roman_FA - roman_Enc ( ⋅ ).
Having successfully incorporated a fix-sized multi-head filter K𝐾Kitalic_K into the frequency space,
D
Magnetic resonance imaging (MRI) stands as a crucial non-ionizing medical imaging modality. However, a persistent challenge in MRI is the lengthy acquisition time. Strategies aimed at accelerating the speed of MRI acquisition while maintaining image fidelity involve omitting K-space lines in the phase-encoding direction, followed by reconstructing images from undersampled data. Notable methodologies addressing this challenge encompass compressed sensing (CS) [1] and parallel imaging (PI) [2, 3].
Furthermore, the undersampling scale of each MRI is not limited to a specific value in this recovery process, which means the implicit function fθsubscript𝑓𝜃f_{\theta}italic_f start_POSTSUBSCRIPT italic_θ end_POSTSUBSCRIPT only learns the mapping from coordinates and feature vectors to voxel intensities. Thus, a reconstruction of multiple scale factors can be theoretically achieved. To enable the ability to discriminate different reconstruction scales and generate a uniform scale-independent feature map, we also embed the scale factor into the encoder.
PI is a commonly used technique to accelerate MRI scans by utilizing the redundancy present in multiple receiving coil elements to periodically reduce phase encoding steps. However, PI is prone to noise and aliasing artifacts, particularly at high undersampling factors.
The reconstruction results of different methods on the knee dataset are depicted in Fig. 3. Similarly to the findings of the brain dataset experiments, GRAPPA, RAKI, and rRAKI continue to exhibit artifacts and noise. Notably, RAKI and rRAKI generate particularly blurry images with overly smoothed tissue boundaries. In contrast, our proposed method successfully reduces noise and artifacts evident in the results of the compared methods and restores images with high-definition detail and improved image contrast, as illustrated by the zoomed-in images in Fig. 3.
Fig. 2 visualizes the reconstruction effects of different methods on the three undersampling scales of the brain dataset. We can observe that these GRAPPA-reconstructed MR images are heavily noisy and suffer from obvious artifacts at higher reconstruction scales. In contrast, RAKI’s results have low noise levels and significantly improved SSIM and PSNR; however, the artifacts remain a serious unresolved problem. The performance of rRAKI, including the quality of the reconstructed images and the evaluation metrics, is between GRAPPA and RAKI. The reconstruction effect of the three methods decreases sharply with increasing undersampling scale. On the contrary, the proposed method is more robust, produces similar image quality at different undersampling scales without noise and obvious artifacts, and outperforms the compared methods.
B
In conclusion, the paper presents a useful masked Transformer method which expands the application of MAE to ECG time series. We interpret an ECG time series as a sequence of segments and process it by the lightweight Transformer.
Despite its simplicity, this method performs surprisingly well when adopting masked pre-training combined with proper training strategies. Therefore, the proposed algorithm outperforms recent state-of-the-art algorithms across multiple ECG classification datasets. In addition, the derived lightweight model offers deployment-friendly features, which is attractive in the clinical environment.
the full potential of masked modeling for ECG classification remains untapped. In this paper, we present a novel approach that significantly outperforms the state-of-the-art in this domain.
The derived lightweight model demonstrates the ability to classify a wide range of ECG diagnoses effectively, while remaining convenient for deployment in the clinical environment.
It is worth to mention the naive lightweight model are friendly to deployment in the clinical environment.
A
In speech attention, advancements comprise Zhang [16]’s deployment of deep networks for enhanced ASR. Dong’s 2D-Attention [17] sharpens Speech-Transformer’s focus, and Ramabhadran [18] added multiple softmaxes to amplify attention in Transformers. Yet, these methods overlook the variable length character of speech. We outline our specific enhancements in the subsequent section.
We believe that crafting adaptable models to address the variable length traits of speech is fundamentally essential to solving this issue. This insight is rooted in Echo Multi-Scale Attention (Echo-MSA), depicted in Fig.2. It uses dynamic attention for speech sequences of varying lengths, extracting speech features at different details and enhancing its modeling of variable-length speech features. Experiments show that Echo-MSA boosts the stability and accuracy of speech recognition.
In the Librispeech dataset, Echo-MSA is integrated into the backbone network. We conduct thorough experimental analyses to verify the effectiveness of Echo-MSA and the training process.
The training framework, depicted in Figure 1, includes Echo-Transformer blocks with four Echo-MSA attention mechanisms, detailed further in Figure 2. The DualFocusGate integrates Echo-MSA with standard MSA, allowing flexible switching between Echo-MSA and Self-Attention, enhancing speech data analysis by capturing statistical features.
As depicted in Fig.2, Echo-MSA processes data via a depth-separable convolutional layer, expanding the receptive field to capture global speech signal details. It uses Wϕsubscript𝑊italic-ϕW_{\phi}italic_W start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT for fine-grained extraction, where applying window Wϕsubscript𝑊italic-ϕW_{\phi}italic_W start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT limits full-attention computation to few neighboring tokens, reducing computational load. Echo-MSA also allows personalized learning by varying Wϕsubscript𝑊italic-ϕW_{\phi}italic_W start_POSTSUBSCRIPT italic_ϕ end_POSTSUBSCRIPT values in different Transformer stages, understanding interactions between frames, phonemes, and words. The complete Echo-MSA output is calculated by:
C
13.44%†superscriptpercent13.44†{13.44\%}^{\dagger}13.44 % start_POSTSUPERSCRIPT † end_POSTSUPERSCRIPT
}+1\right)\beta^{3}L}{\rho_{F}d_{3}+\left(Q\delta_{w}^{2}+1\right)\beta^{2}L},≈ divide start_ARG italic_ρ start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_d start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - italic_ρ start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT italic_d start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT + ( italic_Q italic_δ start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT + 1 ) italic_β start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT italic_L end_ARG start_ARG italic_ρ start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT italic_d start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT + ( italic_Q italic_δ start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT + 1 ) italic_β start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_L end_ARG ,
16.27%†superscriptpercent16.27†{16.27\%}^{\dagger}16.27 % start_POSTSUPERSCRIPT † end_POSTSUPERSCRIPT
We first look at the backward message computations. According to (16), the backward mean 𝐝¯q⁢T←←subscript¯𝐝𝑞𝑇\overleftarrow{{\bf{\bar{d}}}_{qT}}over← start_ARG over¯ start_ARG bold_d end_ARG start_POSTSUBSCRIPT italic_q italic_T end_POSTSUBSCRIPT end_ARG and backward variance v𝐝q⁢T←←subscript𝑣subscript𝐝𝑞𝑇\overleftarrow{{{{v}}_{{\bf{d}}_{qT}}}}over← start_ARG italic_v start_POSTSUBSCRIPT bold_d start_POSTSUBSCRIPT italic_q italic_T end_POSTSUBSCRIPT end_POSTSUBSCRIPT end_ARG can be computed as
16:     Obtain the 𝐱^dsubscript^𝐱𝑑{{\bf{\hat{x}}}_{d}}over^ start_ARG bold_x end_ARG start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT as in (42)
B
Table 2 demonstrates the experimental results of various models in the field of music question answering. These are categorised into three different scenarios: “MusicInstruct (Short)” which represents the short questions on MI datasets, “MusicInstruct (Long)” which refers to the long subjective questions on the MI dataset, and “MusicQA” which denotes the test set of the MusicQA dataset generated from the tags of MTG-jamendo datasetsBogdanov et al. (2019). The table presents performance metrics for four key evaluation criteria: B-U (Bleu-Uni), M-R (METEOR-Rouge), R-L (ROUGE-L), and BERT-S (BERT-Score).
In this section, we introduce the experimental setup as well as present an evaluation of our model’s performance on the Question-Answering of music on the MusicQA and MI datasets. Besides, we evaluate the performance of music captioning on the MusicCaps dataset. We compare our results to state-of-the-art models and discuss the unique challenges posed by this dataset. Last, we carry out an ablation study on training on different parts of the MI dataset.
“MusiLingo / MusicQA” represent the model fine-tuned with Q&A pairs on the finetune set) of the MusicQA dataset, generated from the MagnaTagaTune (MTT) dataset Law et al. (2009). Our experiments on the MusicQA dataset demonstrate competitive performance, aligning with the state-of-the-art (SOTA) results provided by MU-LLaMA. Specifically, our model achieves comparable performance on M-R and R-L metrics and surpasses the SOTA methods on BU and BERT-S, confirming its effectiveness in addressing the challenges posed by the Music question-answering task.
Besides, “MusiLingo / MI Short + MusicQA” is finetuned on the short-question partition on the MI dataset and then is finetuned on the MusicQA dataset. The results are particularly excellent in the B-U and BERT-S metrics and have no significant difference in M-R and R-L compared to the SOTA approach.
Table 2 demonstrates the experimental results of various models in the field of music question answering. These are categorised into three different scenarios: “MusicInstruct (Short)” which represents the short questions on MI datasets, “MusicInstruct (Long)” which refers to the long subjective questions on the MI dataset, and “MusicQA” which denotes the test set of the MusicQA dataset generated from the tags of MTG-jamendo datasetsBogdanov et al. (2019). The table presents performance metrics for four key evaluation criteria: B-U (Bleu-Uni), M-R (METEOR-Rouge), R-L (ROUGE-L), and BERT-S (BERT-Score).
B
Hence, the reception of weak echo signals by the low-sensitivity ISAC receivers results in unsatisfactory target detection/parameter estimation performance. Active RIS has become a prospective solution for ISAC systems to address the above issues and enhance both radar echo signal quality and communication performance by situationally manipulating the wireless propagations and amplifying the signals. There are several studies intended to explore the application of active RIS in ISAC systems. The authors in [39] propose to utilize an active RIS to improve the achievable communication secrecy rate while taking into account the worst radar detection SNR. Moreover, an active RIS-aided ISAC system in the scenario of cloud radio access network (C-RAN) is investigated in [40]. Our recent work [41] employs active RIS to overcome the blockage issue by introducing an additional virtual LoS link between the base station (BS) and the target. Both transmit/receive and reflection beamformings are jointly designed to maximize the radar SNR while guaranteeing pre-defined SINRs for communication users.
Hence, the reception of weak echo signals by the low-sensitivity ISAC receivers results in unsatisfactory target detection/parameter estimation performance. Active RIS has become a prospective solution for ISAC systems to address the above issues and enhance both radar echo signal quality and communication performance by situationally manipulating the wireless propagations and amplifying the signals. There are several studies intended to explore the application of active RIS in ISAC systems. The authors in [39] propose to utilize an active RIS to improve the achievable communication secrecy rate while taking into account the worst radar detection SNR. Moreover, an active RIS-aided ISAC system in the scenario of cloud radio access network (C-RAN) is investigated in [40]. Our recent work [41] employs active RIS to overcome the blockage issue by introducing an additional virtual LoS link between the base station (BS) and the target. Both transmit/receive and reflection beamformings are jointly designed to maximize the radar SNR while guaranteeing pre-defined SINRs for communication users.
While existing works on active RIS-empowered ISAC systems focus on target detection function, target parameter estimation is also an important task in radar sensing and should be further explored.
Motivated by the aforementioned discussions, we investigate the deployment of active RIS in ISAC systems in this paper, with an emphasis on the parameter estimation function for the radar sensing component. Specifically, we consider an ISAC system where BS communicates with multiple users and simultaneously senses a point target blocked by an obstacle. An active RIS is employed to support both communication and sensing functions. Our goal is to jointly design the BS transmit precoding and the active RIS reflection beamforming to optimize the direct-of-arrival (DoA) estimation performance and satisfy the users’ quality of service (QoS) demands and the power limitations at the BS and the active RIS. The main novelties and contributions of this paper are summarized as follows.
Firstly, we introduce active RIS in ISAC systems to enhance the radar parameter estimation performance while guaranteeing the quality of multi-user communications. We formulate signal models for the reception at both the communication users and the BS, from which we derive performance metrics for communication and radar sensing, respectively. More specifically, the CRB for the target DoA estimation in this considered active RIS-empowered ISAC system is meticulously derived for the first time, which is quite different from the CRB for passive RIS-assisted ISAC systems.
B
The choice of μ𝜇\muitalic_μ here is also not unique. The selection adopted in this paper aims to satisfy the condition τμ2⁢(1+τ)=1𝜏superscript𝜇21𝜏1\frac{\tau}{\mu^{2}(1+\tau)}=1divide start_ARG italic_τ end_ARG start_ARG italic_μ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ( 1 + italic_τ ) end_ARG = 1, thereby simplifying the step of computing λik+1superscriptsubscript𝜆𝑖𝑘1\lambda_{i}^{k+1}italic_λ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_k + 1 end_POSTSUPERSCRIPT and reducing the number of uniform parameters that nodes need to preset.
Besides, in practical calculations, the utilization of a self-adjustment technique for parameter tuning becomes pivotal, exerting a profound influence on computational efficiency. We hereby provide a more specific pseudocode of PPCM.
Building on the theoretical foundations of proximal point algorithms and projection contraction methods, this study innovatively proposes an Adaptive Projection-based Prediction-Correction Method (PPCM), specifically designed to address structured monotone variational inequality problems. This method leverages only the gradient information of the objective function for computation, significantly simplifying the computational process and enhancing the practical applicability of the approach. The selection of algorithm parameters is clear and concise, ensuring ease of operation and implementation while maintaining superior algorithm performance. The design of the adaptive adjustment criteria is both intuitive and convenient, and the theoretically established convergence properties provide a solid guarantee for the algorithm’s stability and reliability. Moreover, careful enhancements to PPCM enable it to effectively tackle distributed consensus optimization problems, broadening its range of applications. The decentralized nature of PPCM is evidenced by its reliance on local information for network updates, further augmenting the algorithm’s flexibility and autonomy. Through a series of numerical experiments, the exemplary efficiency and reliability of this method have been thoroughly demonstrated. Looking forward, we anticipate delving deeper into the challenges of distributed optimization and are committed to exploring advanced distributed optimization algorithms that support asynchronous iterations, with the aim of further advancing this field.
A comparison of the data from these two tables further reveals that, compared to the ring graph, the complete graph structure shows significant improvements in efficiency and accuracy. This is because, in a more tightly knit network graph, each node receives a larger and more comprehensive amount of information in each communication round, greatly facilitating rapid convergence of the algorithm.
Upon a detailed analysis of the data presented in several tables, it is evident that PPCM demonstrates a significant efficiency advantage over WAGM and built-in library functions. Specifically, PPCM not only achieves a speed improvement of at least four times compared to WAGM but also exhibits an acceleration ratio up to twelve times when compared to built-in library functions. Notably, while significantly accelerating computation speed, PPCM still maintains a high level of accuracy, whereas WAGM suffers a substantial loss in precision. Despite PPCM requiring two rounds of communication per iteration, compared to the single round needed by WAGM, the total communication cost of PPCM is actually lower due to its fewer required iterations.
A
In Section 4.2, we show that the limit points of the convergent subsequences of the measure flow for the finite population problem (as N𝑁Nitalic_N grows) correspond to an optimal measure flow for the infinite population problem.
We now introduce the symmetric policies that will be used by the agents for the finite population setting. We will focus on the optimal policies for infinite population under the discounted cost criteria. Let Θ⁢(d⁢u,d⁢x)Θ𝑑𝑢𝑑𝑥\Theta(du,dx)roman_Θ ( italic_d italic_u , italic_d italic_x ) be an optimal state-action distribution for some measure μ∈𝒫⁢(𝕏)𝜇𝒫𝕏\mu\in{\mathcal{P}}(\mathds{X})italic_μ ∈ caligraphic_P ( blackboard_X ). We then write
In this section, we focus on the effect of using symmetric policies for finite population control problem. The following example shows that the symmetric policies may not achieve the optimal performance, and personalized policies have to be used for the optimality.
Finally, in this section, we study the relation between the finite population control and the infinite population control. In particular, we will show that the value function of the N𝑁Nitalic_N- population problem converges to the value function of the infinite population problem. We will show that the accumulation points of the optimal state-action distributions for the N𝑁Nitalic_N-population problem are optimal state-action distribution for the infinite population problem. Furthermore, we will establish that one can symmetrically use policies designed for the infinite population problem for the finite population control with near optimality if the population is sufficiently large.
In Section 4.3, we first provide an example that shows that the optimal policies for the finite population setting may have to be personalized and asymmetric. We then establish the near optimality of the symmetric policies designed for the infinite population problem, when they are used for the finite population problem under the discounted cost criteria, for sufficiently large N𝑁Nitalic_N. We finally show that if the discount factor β𝛽\betaitalic_β is sufficiently close to 1111, then this symmetric policy will achieve near optimal performance under the ergodic cost criteria as well.
D
Training loss is a linear combination of an AR transformer decoder loss and an NAR transformer decoder loss.
Meanwhile, the first layer of acoustic tokens of the timbre prompt is used as the prefix in AR decoding.
In the AR transformer decoder, we do not explicitly select an utterance as the timbre prompt in training, which means all acoustic tokens of the first layer are predicted with the teacher-forcing technique.
In the end, the first layer of acoustic tokens predicted by the AR transformer decoder and the remaining layers of acoustic tokens predicted by the NAR transformer decoder are concatenated to form the predicted acoustic tokens.
Then the outputs of the encoder are fed into the acoustic decoder along with the acoustic tokens of the timbre prompt to generate speech with the same timbre as the timbre prompt.
B
In Eq. (2), class conditional probability, marginal probability, and entropy are denoted by p⁢(y∣𝐱)𝑝conditional𝑦𝐱p(y\mid\mathbf{x})italic_p ( italic_y ∣ bold_x ), p⁢(y)𝑝𝑦p(y)italic_p ( italic_y ), and H⁢(x)𝐻𝑥H(x)italic_H ( italic_x ) for image samples x𝑥xitalic_x.
The inter-class mode collapse is measured by the IS score. IS computes the Kullback–Leibler divergence between the class conditional probability p⁢(y∣𝐱)𝑝conditional𝑦𝐱p(y\mid\mathbf{x})italic_p ( italic_y ∣ bold_x ) of each generated image and the marginal probability p⁢(y)𝑝𝑦p(y)italic_p ( italic_y ) calculated from a group of images generated from all classes. IS measures the lowest score as 1 and higher as equal to the number of classes. It estimates the upper bound as the higher score shows that the model can generate diversified and high-quality images. IS shows an acceptable correlation with the diversity and quality of generated images and uses a pre-trained Inception-Net for the assessment of generated images. It is defined in Eq. (2):
The intra-class diversity of synthetic images generated by the DCGAN and ACGAN is also improved using the AIIN normalized X-ray images as shown by the improved FID scores. FID analysis indicates the efficacy of using AIIN with GANs in improving the diversity of synthetic images. For the ACGAN, the IS and FID analyses show that the AIIN has a limited impact on alleviating the inter-class mode collapse using the AIIN for generating diversified synthetic X-ray images. The discriminator of the ACGAN finds it difficult to classify synthetic images from real images of binary classes. The AIIN has normalized the images of both classes where features of X-ray images of one class resemble the X-ray image features of the other class. So, the discriminator sends similar gradient feedback to the generator, which generates identical images repeatedly.
The occurrence of intra-class mode collapse is identified using MS-SSIM and inter-class mode collapse using IS scores. The diversity of generated synthetic images is assessed by the FID score. These metrics enable the evaluation of a GAN’s capacity to generate diversified synthetic images.
The intra-class and inter-class diversity of synthetic images is assessed by the FID and IS scores. IS estimates the diversity of synthetic images generated from multiple classes. It has some limitations as it only relies on generated synthetic images without comparing them to real images. Whereas, FID estimates the distance between feature activations of real and feature activations of synthetic images [48] from single class images. FID score ranges between 0.0 and +∞+\infty+ ∞ while a lower score indicates a higher diversity of synthetic images as compared to the real images. In this work, 1340 samples were selected from real images and 1340 samples from synthetic images for measuring FID scores. IS score is measured using 1000 images from each class of generated images.
D
T(S)={|ψ⟩,s.t.M|ψ⟩=|ψ⟩,∀M∈S}T(S)=\{|\psi\rangle,s.t.\,M|\psi\rangle=|\psi\rangle,\forall M\in S\}italic_T ( italic_S ) = { | italic_ψ ⟩ , italic_s . italic_t . italic_M | italic_ψ ⟩ = | italic_ψ ⟩ , ∀ italic_M ∈ italic_S }
operator U𝑈Uitalic_U that depends only on the time instances t1subscript𝑡1t_{1}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and
Suppose M∈S𝑀𝑆M\in Sitalic_M ∈ italic_S and Pauli operator E𝐸Eitalic_E anti-commutes with M𝑀Mitalic_M.
syndrome as shown in Table I. Each bit in the 5-bit syndrome represents whether the corresponding stabilizer commutes with the error. If it commutes, the bit is 0, else it is 1. It should also be observed that each syndrome
commutes with M𝑀Mitalic_M, M⁢(E⁢|ψ⟩)=E⁢M⁢|ψ⟩=E⁢|ψ⟩𝑀𝐸ket𝜓𝐸𝑀ket𝜓𝐸ket𝜓M(E|\psi\rangle)=EM|\psi\rangle=E|\psi\rangleitalic_M ( italic_E | italic_ψ ⟩ ) = italic_E italic_M | italic_ψ ⟩ = italic_E | italic_ψ ⟩;
B
The development set contains 216 sessions with 20.3 hours of audio data. There are 1 to 20 speakers in each session of the development set. The test set contains 232 sessions with a total audio length of 43.5 hours. The speaker number for each session ranges from 1 to 21 in the test set.
we observed better DER by not applying the clustering to the detected speakers corresponding to the pseudo-speaker profiles.
The diarization error rates (DER) was calculated with a 0.25-second of tolerance collar by following the description of the VoxConverse dataset [22].
The independent speaker detection module consisted of a linear projection layer followed by BLSTM layers. The joint speaker detection module consisted of two blocks of sequence layers where a transformer layer is applied on the speaker-axis followed by a BLSTM layer across the time-axis [18]. The number of parameters of the PET-TSVAD model was 12.47M.
To examine the quality of the extracted speaker profiles, we computed the DERs of some of the clustering configurations on the development sets of both VoxConverse and DIHARD-I, which are shown in Table 2. It is observed that the AHC with threshold 0.9 and 0.92 achieved significantly better DER than the AHC with threshold 0.97, especially for the VoxConverse development set. From the results of Table 2, the overall quality of speaker profile of Config. 2 was higher than that of Config. 1, although Config. 1 included slightly higher percentage of oracle speaker profiles. In addition, the Config. 2 was more diverse than Config. 1 in terms of error patterns in speaker profiles as it included NME-SC and more thresholds for AHC.
B
Following the satisfactory performance achieved by MOSA-Net+ in the previous experiments, we next evaluate MOSA-Net+ on VoiceMOS Challenge 2023. For VoiceMOS Challenge 2023, MOSA-Net+ is exclusively trained using the noisy-and-enhanced track provided by the organizing committee. The goal of the noisy-and-enhanced track is to estimate the mean opinion score (MOS) of the quality score. Therefore, we selected MOSA-Net+ without domain adaptation to deploy the model, considering its best performance in the previous experiments. In detail, the setup involves employing cross-domain features, specifically a combination of PS+LFB+WS, as the acoustic features. The model architecture selected for training is CNN-BLSTM with an attention mechanism, and a multi-task model architecture is also utilized during the training phase. In this context, the model is trained using both MOS and intelligibility scores as labels, following the objective function defined in Eq. 1. However, during inference, we use the model to estimate the MOS score. In addition, we set γ1=1subscript𝛾11\gamma_{1}=1italic_γ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 1, γ2=1subscript𝛾21\gamma_{2}=1italic_γ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 1, and 0.00001 for the learning rate.
This paper presents MOSA-Net+, an improved version of MOSA-NET that predicts human-based speech quality and intelligibility. MOSA-Net+ uses a well-known weakly supervised model (Whisper) to generate cross-domain features. The model employs a CNN-BLSTM architecture with an attention mechanism and is trained using a multi-task learning approach to predict subjective listening test scores. Experimental results show that incorporating Whisper’s embedding features notably improves the robustness of MOSA-Net+. Additionally, combining the embedding features from Whisper and SSL models only results in a marginal improvement. Furthermore, when evaluated on the TMHINT-QI dataset, MOSA-Net+ outperforms MOSA-Net, MOS-SSL, and several intrusive metrics in all evaluation metrics for predicting quality and intelligibility scores. Finally, in the noisy-and-enhanced track of VoiceMOS Challenge 2023, MOSA-Net+ can achieve the best performance among nine systems. In the future, we plan to explore the potential of Whisper in developing a robust speech assessment model for more unseen tasks. Meanwhile, we will also explore a direct integration of the speech assessment model with speech processing applications.
Following the satisfactory performance achieved by MOSA-Net+ in the previous experiments, we next evaluate MOSA-Net+ on VoiceMOS Challenge 2023. For VoiceMOS Challenge 2023, MOSA-Net+ is exclusively trained using the noisy-and-enhanced track provided by the organizing committee. The goal of the noisy-and-enhanced track is to estimate the mean opinion score (MOS) of the quality score. Therefore, we selected MOSA-Net+ without domain adaptation to deploy the model, considering its best performance in the previous experiments. In detail, the setup involves employing cross-domain features, specifically a combination of PS+LFB+WS, as the acoustic features. The model architecture selected for training is CNN-BLSTM with an attention mechanism, and a multi-task model architecture is also utilized during the training phase. In this context, the model is trained using both MOS and intelligibility scores as labels, following the objective function defined in Eq. 1. However, during inference, we use the model to estimate the MOS score. In addition, we set γ1=1subscript𝛾11\gamma_{1}=1italic_γ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 1, γ2=1subscript𝛾21\gamma_{2}=1italic_γ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 1, and 0.00001 for the learning rate.
In Table 4 222The evaluation for ranking was performed by the VoiceMOS 2023 Committee. [27], MOSA-Net+ exhibits superior performance compared to LE-SSL-MOS employing SSL fine-tuning with listener embedding, KAQ utilizing a stacking process, four other teams, and two baseline systems (UTMOS and SSL-MOS), showcasing a notable margin of improvement in all evaluation metrics. In addition, unlike the other mentioned systems (LE-SSL-MOS, UTMOS, and SSL-MOS), MOSA-Net+ is the only system that employs the Whisper model to generate the acoustic feature, whereas the other systems use SSL to generate the acoustic features. Therefore, it reaffirms the advantages of Whisper to provide decent acoustic features for better prediction capability of a non-intrusive speech assessment model.
The contribution of this study is twofold; first, we investigate the effectiveness of using the speech representations from Whisper in deploying a speech assessment model. Second, we explore the potential advantages of combining the embedding features from Whisper and SSL models while deploying MOSA-Net+. Experimental results in Taiwan Mandarin Hearing In Noise test - Quality & Intelligibility (TMHINT-QI) [25] dataset first confirmed that Whisper embedding features can improve prediction performance for deploying the MOSA-Net+ model. Second, combining Whisper and SSL embedding can improve performance, but the improvement is rather marginal. Meanwhile, MOSA-Net+ notably outperforms several intrusive methods, MOSA-Net, and the other SSL-based assessment models in estimating subjective quality and intelligibility scores across all evaluation metrics, confirming Whisper’s potential to provide more robust acoustic features [26]. In order to further validate its performance, MOSA-Net+ was evaluated in the noisy-and-enhanced track of the VoiceMOS Challenge 2023 [27], emerging as the top-performing model among nine systems.
C
To further investigate the impact of the order in which the spatio-temporal adapter is built, we conducted a comparison by reversing the order of spatial and temporal attention. The quantitative evaluation results, presented in Table 3, reveal that initiating with spatial attention, followed by temporal attention, leads to a decrease of 0.5% and an increase of 0.02, Dice and temporal consistency metric. In comparison, the concurrent application of spatial and temporal attention, as Fig 1, results in a reduction of 0.4% in the dice coefficient and a 0.02% increase in temporal smoothness. Based on these findings, we have chosen to structure the temporal and spatial attention.
To evaluate the efficacy of the multi-scale fusion approach, we conduct an ablation study focusing on the multi-scale fusion encoder and the mask decoder components. By eliminating both the multi-scale image encoder and the mask decoder, we demonstrate its effectiveness on segmentation performance. As illustrated in Table 4 (row 3 and 4), without multi-scale fusion leads to a decline in segmentation outcomes by 1.1 %, also with an increase in temporal smoothness by 0.03. Finally, we obtain the best results when both, spatio-temporal adapter and multi-scale fusion are activated.
As discussed in the preceding section, to leverage the advantages of both CNN and the ppretrained SAM, we have devised an image encoder and made modifications to SAM’s lightweight mask decoder. In the image encoder, we employ a CNN-based encoder to extract features while progressively downsampling input image. Within the modified SAM’s mask decoder, consisting of two layers of transpose convolution, we employed four CNN decoder layers to interact with multi-scale encoder features while keeping the prompt encoder architecture. This effectively combine the strengths of both CNN and self-attention mechanisms, thereby enhancing the model’s performance in feature extraction and segmentation tasks. Our experimental evaluations have demonstrated the improved performance of our model, as illustrated in Table 4, confirming the effectiveness of our multi-scale fusion strategy.
In this section, we conduct a comprehensive examination of the various elements that make up our proposed model. We begin our assessment by evaluating the effectiveness of the spatio-temporal design and then proceed to assess the impact of integrating multi-scale fusion. We also evaluated the impact of pretrained SAM on model performance.
In this paper, we propose a efficient architecture with parameter-efficient adaptation method to adapt SAM from 2D to the medical video 2DT segmentation task especially for echocardiography. Distinct from previous SAM based adaptation method, we embedded 2DT input with a simple CNN encoder. We freeze the SAM’s encoder and leverage it as the feature extractor to utilize SAM’s generalizable knowledge learned from large dataset. To learn temporal relation efficiently between each frames, we divided the 2DT input along the time dimension, and then the temporal adapter to generate the feature maps. We propose a temporal adapter consist of global and local temporal attention blocks. This strategy proves to behave well to overcome the limitation the discontinuity issue caused by frame by frame segmentation, and also improve overall segmentation results. Lastly, we utilize lightweight mask decoder deisign with multi-scale fusion to deal with varying object and ROI scale in echo scan. We conduct experiments on echocadiography segmentation datasets with comprehensive comparisons with domain SOTA approaches, as well as adapters in general. The results show our methods can outperform existing methods by a large margin, and notably our method shows significantly high performance on zero-shot analysis in-hospital dataset from Massachusetts General Hospital (MGH) and Brigham and Women Hospital. Our contributions are summarized as follows:
A
The benefit of ILM subtraction becomes generally smaller, and the best results with ELM are on par with full-context transducer.
Various sequence discriminative training settings yield similar performance as the CE base model with ILM subtraction, which indicates the claimed correlation also for context-1 subword transducers.
In this work, we showed the strong correlation between ILM subtraction and sequence discriminative training for subword-based neural transducers.
Further applying these ILMs to the CE base model for ILM subtraction in recognition also gives the same performance.
Table 1 shows the comparison between ILM subtraction and sequence discriminative training for the full context RNN-T model. As expected, for the baseline model trained with CE, methods utilizing ELM yield substantial improvements over the standalone RNN-T system, and ILM subtraction approaches further improve the performance compared to the SF approach. For the models trained with sequence discriminative training criteria, the performance of SF is already comparable to the CE base with ILM subtraction. Moreover, when applying ILM subtraction on MMI/MBR-trained models, the improvement is notably smaller compared to the improvement seen in the CE-trained model, which indicates that the ILM is suppressed during sequence discriminative training.
A
Openface 2.0: Facial Behaviour Analysis toolkit [22] was used to extract 10 Facial Action Units (FAUs) that capture the coordination of lip and near-lip movements when the subjects are speaking [23] from the video recordings. The extracted FAUs were used for the classification models.
From the TVs extracted from audio and the FAUs extracted from video, a high-level correlation structure was computed based on the work in Huang et al. [24], which calculates correlations starting from 0 to a delay of ’D’ frames. The delayed autocorrelation and cross-correlations across TVs in the segments are stacked together to create the Full Vocal Tract Coordination (FVTC) correlation structure. A grid search from (45,50,55) was done to pick the ’D’ parameter used to create the correlation structures and the best unimodal performance was achieved with D=50 for TVs and D=45 for the FAUs.
scale, and to sample as many as possible different points along the auto and cross-correlation matrix following the work of Huang et al. [24]. As the goal is to perform session-level classification, the output of the first fully connected layer in the segment-level classifier is taken across all segments of a session, stacked together, and then sent through a convolution neural network which produces a session-level label.
As audio-based features, vocal tract variables (TVs) were extracted from the acoustic signals of the segmented audio recordings using an acoustic-to-articulatory speech-inversion system [14]. A total of 6 TVs were extracted. In addition, two other glottal parameters, aperiodicity, and periodicity were also extracted using an Aperiodicity, Periodicity, and Pitch detector [15]. This voicing information has been shown to improve the accuracy of the speech inversion system that estimates the 6 TVs [14] and the accuracy of mental health classification systems[16]. Since previous studies in detecting mental health disorders (depression and schizophrenia) [6, 17] have shown that the Mel-Frequency Cepstral Coefficients (MFCCs) and TVs outperform features like extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS) [18] and DEEP SPECTRUM features [19] in audio-based prediction tasks, those features were not used. However, baseline models were also trained using self-supervised audio features like Wav2Vec [20] and HuBERT[21] for comparison.
Openface 2.0: Facial Behaviour Analysis toolkit [22] was used to extract 10 Facial Action Units (FAUs) that capture the coordination of lip and near-lip movements when the subjects are speaking [23] from the video recordings. The extracted FAUs were used for the classification models.
A
(12) and (13) are calculated to obtain m𝑚mitalic_m MVs, which require at least 2msuperscript2𝑚2^{m}2 start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT complex multiplications to compute the magnitude squares of the elements of the received sequence. Hence, the computation complexity at the receiver is 𝒪⁢(2m)𝒪superscript2𝑚\mathcal{O}(2^{m})caligraphic_O ( 2 start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT ), i.e., a linearly increased complexity with the length of CS.
In Section V, the convergence of the UAV waypoint flight control is discussed. In Section VI, we assess the proposed scheme numerically. We conclude the paper in Section VII. A summary of the notation used throughout the paper is given in TABLE I.
In Figure 1, we provide the transmitter and receiver block diagrams for the proposed OAC scheme. The k𝑘kitalic_kth sensor first estimates the position of the UAV, e.g., by using some image processing [41, 42]. It computes the vector vk(ℓ)superscriptsubscriptv𝑘ℓ{\textit{{v}}}_{k}^{(\ell)}v start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( roman_ℓ ) end_POSTSUPERSCRIPT and calculates tk(ℓ)superscriptsubscriptt𝑘ℓ\textit{{t}}_{k}^{(\ell)}t start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( roman_ℓ ) end_POSTSUPERSCRIPT based on Theorem 1 by using the mapping in (9). It then maps the elements of the encoded CS tk(ℓ)superscriptsubscriptt𝑘ℓ\textit{{t}}_{k}^{(\ell)}t start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( roman_ℓ ) end_POSTSUPERSCRIPT to 2msuperscript2𝑚2^{m}2 start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT OFDM subcarriers and calculates the N𝑁Nitalic_N-point inverse discrete Fourier transform (IDFT) of the mapped CS. All the sensors transmit their signals along with a sufficiently large CP duration for OAC. The UAV receives the non-coherently superposed signal. After discarding the CP and calculating the DFT of the remaining received samples, it obtains En+subscriptsuperscript𝐸𝑛E^{+}_{n}italic_E start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT and En−subscriptsuperscript𝐸𝑛E^{-}_{n}italic_E start_POSTSUPERSCRIPT - end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT. The UAV finally detects the MVs with (14) and updates its position based on (1). We discuss the detector performance in (14) rigorously in the following section.
Now the convergence of the system under the OAC (MV) strategy is presented in the following theorem:
In this section, we discuss the convergence of the resulting systems under the control strategies in (3) by analyzing their Lyapunov stability based on the following definition:
D
Due to the extremely short wavelengths, massive arrays, and limited communication distances, a substantial portion of practical THz scenarios falls within the near-field. Consequently, THz systems are expected to operate within both near-field and far-field regions, leading to the emergence of a novel paradigm termed cross-field THz communications [13]. Moreover, the coexistence of far and near-field channel paths is also highly probable, giving rise to the concept of hybrid-field THz communications. The spherical wave model is appropriate for channel modeling, estimation, and baseband signal processing in such cases. Accounting for the near-field THz channel structures provides more spatial degrees of freedom in comparison to far-field channels. Such spatial richness offers novel opportunities for resource bit-mapping and introduces novel dimensions–such as distance–contributing to enhanced source parallelizability. Higher channel diversity is expected across the UM-MIMO near-field channels, which, in our context, translates to richer PSI generation at detector outputs.
Guessing random additive noise decoding (GRAND) recovers code-words by guessing rank-ordered putative noise sequences and reversing their effect until valid code-words are obtained. There are several arguments in favor of adopting GRAND in THz-band, Tbps settings. First, GRAND is a universal decoding mechanism that can decode any block-code construction, enabling low-cost reconfigurable architectures that can support the requirements of diverse emerging THz communications applications. GRAND particularly performs well with short moderate-redundancy codes that naturally arise from parallelizable baseband architectures. GRAND has also demonstrated good performance leveraging PSI on additive-noise statistics and channel-state information in fading channels. The performance of the hardware-friendly ordered reliability bits GRAND (ORBGRAND) under PSI extracted from linear ZF and MMSE equalization approximates the performance of state-of-the-art decoders of CRC-assisted polar (CA-polar) and Bose–Chaudhuri–Hocquenghem (BCH) codes that avail of complete soft information [3].
To improve performance while addressing these concerns, we propose incorporating channel state information (CSI) and additive noise statistics into channel bit mapping and code design, then leveraging CSI in PSI for a single-shot (no iterations) data detection and decoding mechanism. PSI can be expressed as the effective signal-to-noise ratio (SNR) following detection processing [3], and it only depends on the channel. Such information can thus be computed once over a channel coherence time/bandwidth, significantly reducing complexity. Per realization, only hard-output bits will be passed from the detector to the decoder. The need for generating per-bit soft-detection reliability information in both linear and non-linear detectors is elevated at a graceful performance cost when PSI is rich. PSI richness is related to bit mapping design, where combining bits from different transmission sources in a single code-word is favored. Additionally, leveraging the channel structure in PSI eliminates the need for costly bit-interleaving and noise-whitening operations.
The results in Fig. 4 further illustrate the performance of the proposed parallelizability framework under the same THz indoor channel conditions with SC-List and GRAND decoding of polar codes (K/N=116/128𝐾𝑁116128K/N\!=\!116/128italic_K / italic_N = 116 / 128), assuming ZF detection in a 4×4444\!\times\!44 × 4 MIMO system. PSI captured most of the structure in the channel, resulting in a marginal gap of 2⁢dB2dB2\,\mathrm{dB}2 roman_dB compared to soft decoding. GRAND further outperforms SC-List decoding by 2⁢dB2dB2\,\mathrm{dB}2 roman_dB as it jointly considers CRC and polar code redundancies. Interestingly, the complex soft SC-List decoding (with a list size of 16161616) is matched by low-complexity PSI-based GRAND decoding, further highlighting the importance of using noise-centric decoders in this range of high rates and short codes.
The more we know about the structure and correlation in THz channel and noise models, the better GRAND can be optimized for THz scenarios. The prospects of GRAND are benchmarked to other decoding schemes in Table I. It is worth mentioning, however, that maximum-likelihood GRAND can result in random runtime as the number of guesses can vary between realizations of channel use. To bound latency, we have to sometimes terminate guesswork early, which is referred to as GRAND with abandonment. Furthermore, other variations of universal noise-centric decoders also offer performance and complexity trade-offs worth investigating in a THz-band, Tbps context, such as variations of ordered statistic decoding (OSD) [14], which has demonstrated good performance in decoding short codes.
A
System identification, even in linear settings, does not automatically preserve properties like dissipativity and passivity without explicit constraints, even if the original system is known a priori to possess such properties. While identification of stable models has been studied for several decades, system identification approaches that preserve system dissipativity and passivity properties have only been investigated in the context of linear systems (see [28] for a comprehensive survey), linear approximations for nonlinear systems [29, 30], and Koopman operator models [31, 32]. Learning stable neural ordinary differential equation (ODE) models has been achieved through neural Lyapunov functions or Lyapunov constraints (see [33] for a compilation of works addressing this topic). There is also some recent work on learning dissipative neural dynamics limited to specific port-Hamiltonian network structures; further, these models only apply when the system inputs are constant [34]. Dissipativity theory for neural dynamical systems has also been confined to special cases such as Lyapunov stability for autonomous systems, that is, systems without inputs. [35]. The problem of learning provably dissipative deep neural dynamical models for general nonlinear systems, especially in the closed-loop setting, remains an open problem. The key challenge lies in imposing matrix inequality constraints, such as those required to guarantee dissipativity, during deep NN training; this is a hard problem with no known solution.
As discussed earlier, there is no guarantee that the identified neural dynamical system (13) is incrementally dissipative (Definition 1) even if the unknown nonlinear system (1) is known to be dissipative. One approach to obtain a dissipative model is to constrain the NN parameters θ𝜃\thetaitalic_θ during training. However, typical neural ODE learning algorithms cannot directly handle constraints during training. Further, guaranteeing dissipativity properties such as (10) on the trained model requires imposing matrix inequality constraints on the training of neural ODE models; this is a complex problem for which no known algorithms exist. To address this issue, we propose an algorithm to perturb the parameters of the the baseline model post-training to guarantee incremental dissipativity, while retaining the fit of the learned model.
Then, we perturb its weights to enforce incremental dissipativity. We would ideally like to minimize the dissipativity-enforcing weight perturbation, in order to maintain the closeness of the learned model to the behavior of the nonlinear system.
In this work, we address the particular problem of learning a dissipative neural dynamical model for a nonlinear system that is known to satisfy an incremental dissipativity property. We propose a two-stage solution to address this problem. First, we train an unconstrained feedforward deep neural ODE model using input-output trajectories from the nonlinear system. Next, we derive sufficient conditions on the weights of the NN to guarantee incremental dissipativity of the learned model, and pose an optimization problem to minimally perturb the weights to enforce these conditions. Finally, we adjust the biases, as necessary, to retain the fit of the dissipative neural dynamical model to the ground truth.
Second, we propose an algorithm where dissipativity can be imposed by perturbation of the weights alone, allowing us to independently tune the biases to retain the fit of the model to the true system dynamics without losing our dissipativity guarantee. To the best of our knowledge, this is the first work to develop algorithms that preserve the input-output property of dissipativity in identification of neural ODEs (where the theory has been limited to autonomous systems).
C
The paper is organized as follows. The next section is dedicated to mathematical preliminaries on harmonic modelling. In Section III, we state the identification problem. The main results are established in Section IV where we tackle the approximation of the infinite-dimensional identification problem. Illustrative examples are given in Section V, demonstrating the application of our approach to identify linear time-periodic systems, even in scenarios where state measurements are affected by noise.
To simplify the notations, Lp⁢([a,b])superscript𝐿𝑝𝑎𝑏L^{p}([a,b])italic_L start_POSTSUPERSCRIPT italic_p end_POSTSUPERSCRIPT ( [ italic_a , italic_b ] ) or Lpsuperscript𝐿𝑝L^{p}italic_L start_POSTSUPERSCRIPT italic_p end_POSTSUPERSCRIPT will be often used instead of Lp⁢([a,b],ℂn)superscript𝐿𝑝𝑎𝑏superscriptℂ𝑛L^{p}([a,b],\mathbb{C}^{n})italic_L start_POSTSUPERSCRIPT italic_p end_POSTSUPERSCRIPT ( [ italic_a , italic_b ] , blackboard_C start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ).
then, as X𝑋Xitalic_X and U𝑈Uitalic_U are absolutely continuous functions of the time, the supremum on any compact set exists and using Theorem 2 the result follows.
Notations: Casuperscript𝐶𝑎C^{a}italic_C start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRIPT denotes the space of absolutely continuous function,
We say that X𝑋Xitalic_X belongs to H𝐻Hitalic_H if X𝑋Xitalic_X is an absolutely continuous function (i.e X∈Ca⁢(ℝ,ℓ2⁢(ℂn))𝑋superscript𝐶𝑎ℝsuperscriptℓ2superscriptℂ𝑛X\in C^{a}(\mathbb{R},\ell^{2}(\mathbb{C}^{n}))italic_X ∈ italic_C start_POSTSUPERSCRIPT italic_a end_POSTSUPERSCRIPT ( blackboard_R , roman_ℓ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ( blackboard_C start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ) ) and fulfills for any k𝑘kitalic_k the following condition:
C
With more than 7,000 languages around the globe, most languages still lack adequate support from speech technologies [mms]. In recent years, both the academic and industrial communities have displayed considerable interest in multilingual automatic speech recognition (ASR) [18Multi, 20MMASR, 21scaling, 21XLSR, 22JUST, 22Whisper] to expand language coverage. These studies can be broadly categorized into two types: one involves supervised or semi-supervised learning of end-to-end ASR systems using multilingual data [18Multi, 19Largescale, 20MMASR, 21scaling, 22Device, 22Whisper], while the other utilizes self-supervised learning (SSL) techniques to learn meaningful multilingual generalized representations from vast amounts of unlabeled data[21XLSR, 21UniSpeech, 22JUST, 22XLS-R, mms, xue2023tranusr, 23USM]. Specifically, the latter utilizes SSL to create generalized representations, which often exhibit superior performance when lacking enough labelled data in low-resource languages.
We evaluate the performance of the SSHR on two multilingual datasets, ML-SUPERB [23MLSUPERB] and Common Voice [19commonvoice]. Our contributions can be summarized as follows: (1) We analyze the layer-wise representations of MMS and discover that the middle layers contain more language-related information. The middle and high layers contain more content-related information, but the final layers will be lost.
The complexity of constructing a multilingual ASR system arises from the need to accommodate significant variations in acoustic, linguistic, and semantic across diverse languages [23HierCTC]. Consequently, the key to achieving a successful multilingual ASR system is ensuring the model can accurately recognize and transcribe specific languages. This can be achieved by exploring language-related information from SSL’s middle layers. With language identification (LID) in place, the subsequent challenge is utilizing the content-related information to perform downstream ASR tasks more accurately. To achieve satisfactory results in fine-tuning, the final layers of the SSL must contain substantial content-related information.
This paper proposes self-supervised hierarchical representations (SSHR) for improving multilingual ASR. Our approach encompasses three key refinements during the fine-tuning process of MMS [mms]. First, we extract a LID-related frame from the middle layers and concat it into encoder frames to guide specific language content extraction in the subsequent layers.
While fine-tuning SSL for downstream tasks is a simple and effective approach, research has proven that limited relevant information is available at the final layers of the SSL model [21layerwise]. Research in SSL representations has unveiled a notable correlation between the middle layers and language-related information [22LID]. Additionally, the middle and high layers tend to encapsulate more content-related information [21layerwise]. However, this content-related information diminishes as we progress through the model’s final layers. Although the effectiveness of current SSL models like Massively Multilingual Speech (MMS) [mms] have been proven in multilingual ASR, the extent to which the layers of the SSL model contribute to the specific task still needs to be explored. Consequently, the optimal utilization of SSL’s hierarchical representations to enhance the fine-tuning performance of downstream multilingual ASR tasks presents an unresolved challenge.
D
The other two cases concern nonoptimal time of flight, tf<tf*subscript𝑡𝑓subscriptsuperscript𝑡𝑓t_{f}<t^{*}_{f}italic_t start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT < italic_t start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT and tf>tf*subscript𝑡𝑓subscriptsuperscript𝑡𝑓t_{f}>t^{*}_{f}italic_t start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT > italic_t start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT, for which the convexification can be lossy.
The results are summarized in Figure 5 and Table 3, which presents the final position and velocity as well as the fuel consumption for the mission.
The simulation scenario is based on the Mars soft landing mission presented in [17], and the simulation parameters are summarized in Table 2.
Note that the constraints in (1a) and (1b) are nonlinear and that the lower bound on the thrust in (1c) is nonconvex.
In this test, we configured the parameter α𝛼\alphaitalic_α in (1b) to have a very small value, as the weight of the drone remains constant regardless of thrust usage. Furthermore, we incorporated a dynamic adjustment to the maximum allowable tilt angle of the thrust vector, which progressively decreases as the vehicle approaches the landing pad [42]. The parameters used for the flight test is given in Table 4.
B
Machine learning (ML) offers an alternative by shifting the computational burden to offline training, thereby making dynamic decision making via the online application of ML algorithms computationally feasible. Recent works propose ML for solving MIPs and combinatorial optimization (CO) [4], either in an end-to-end fashion or to accelerate traditional solvers. Graphs play a central role in formulating many CO problems [5], representing paths between entities in routing problems, or interactions between variables and constraints in a general CO [6, 7]. The use of Graph Neural Networks (GNNs) is also being explored to leverage the underlying graph structure during training and identify common patterns in problem instances. The traveling salesman problem (TSP) is a fundamental problem in CO and a standard benchmark which has been extensively studied with traditional optimization techniques. Recently, GNNs have been used to solve the TSP with good performance and generalizability [8, 9, 10]. In this work we leverage GNNs to learn the power flow representation for reconfiguration.
Grid reconfiguration for distribution grids has been studied with varying solution methodologies including knowledge-based algorithms and single loop optimization [1, 11], heuristic methods [12, 13], and reformulation as a convex optimization problem using big-ℳℳ\mathcal{M}caligraphic_M constraints [14, 15, 16]. However, these methods are not computationally tractable for large-scale optimization in close to real-time applications, and may be limited to passive grids (i.e. no local generation). Machine learning approaches for DyR have also been proposed [17, 3]. In [17] the DyR problem is formulated as a Markov decision process and solved using reinforcement learning. In [3] a light-weight physics-informed neural network is proposed as an end-to-end learning to optimize framework with guarantee certified satisfiability of the power physics. A physics-informed rounding layer explicitly embeds the discrete decisions within the neural framework. These approaches show potential, but both are limited to a given grid topology and switch locations. Our approach is similar to that of [3] wherein we embed discrete decisions directly within an ML framework.
We propose GraPhyR, a physics-informed machine learning framework to solve (1)-(11). Our framework in Fig. 1 features four architectural components: (A) gated message passing to model switches, (B) local predictions to scale across nodes, (C) physics-informed rounding to handle binary variables, and (A) topology input data for adaptability during online deployment. We embed the physics of the distribution grid and reconfiguration problem within each component of the GraPhyR framework. First, the GNN embeds the topology of the underlying distribution grid, and explicitly models the switches using gated message passing. Second, the topology selection embeds the discrete open/close decision of the switches using the physics-informed rounding. Third, we use the power flow equations to predict a subset of variables (denoted as the independent variables), and compute the remaining variables in a recovery step. The GraPhyR framework uses these physics-informed layers to learn to optimize the reconfiguration task while satisfying equality and binarity constraints. The framework is presented in detail next.
The main contribution of this paper is GraPhyR, a graph neural network (Gra) framework employing physics-informed rounding [3] (PhyR) for DyR in distribution grids. GraPhyR is an end-to-end framework that learns to optimize the reconfiguration task, enabled by four key architectural components:
We developed GraPhyR, an end-to-end physics-informed Graph Neural Network framework to solve the dynamic reconfiguration problem. We model switches as gates in the GNN message passing, embed discrete decisions directly within the framework, and use local predictors to provide scalable predictions. Our simulation results show GraPhyR outperforms methods without GNNs in learning to predict optimal solutions, and offers significant speed-up compared to traditional MIP solvers. Further, our approach adapts to unseen grid conditions, enabling real-world deployment. Future work will investigate the scalability of GraPhyR to larger grids (200+ nodes), approaches to reduce inequality constraint violations, and regularization strategies to improve topology prediction. Finally, further efforts are needed in developing good datasets with representative timeseries data in distribution grids.
A