context | A | B | C | D | label
---|---|---|---|---|---
2) Under a special case where the state prediction MSE matrix is ignored, the optimal relative motion state is obtained and proved to maintain a fixed elevation angle and zero relative velocity between the UAV and the object.
|
3) Simulation results verify that when the predicted measurement MSE dominates the predicted PCRBs, the solution for the weighted sum-predicted PCRB minimization can be approximated by the optimal relative motion state obtained under the considered special case, and further illustrate three interesting trade-offs achieved by the fixed elevation angle.
|
Under the predicted measurement MSE-dominant case, the solution can be approximated by the optimal relative motion state obtained under the measurement MSE-only case, which is proved to sustain a fixed elevation angle and zero relative velocity.
|
Furthermore, simulation results validate the effectiveness of the proposed tracking scheme and the approximation as well as three interesting trade-offs on system performance achieved by the fixed elevation angle.
|
2) Under a special case where the state prediction MSE matrix is ignored, the optimal relative motion state is obtained and proved to maintain a fixed elevation angle and zero relative velocity between the UAV and the object.
|
A
|
In short, we simulate, through spectral ray tracing, the effect that an imperfect (i.e., aberrated) optical system has on the RGB image measured when observing a specific spectral scene. The details of this simulation can be found in the Supplemental Material.
|
Figure 5. Training MST++ with metamers. It fails to combat fixed metamers and on-the-fly metamers, in particular on the spectral accuracy SAM.
|
In Fig. 6, we show an example with MST++ for the validation on SAM in two situations, one with fixed metamers, and the other with on-the-fly metamers. In each experiment, we compare the SAM difference with and without aberrations. As a reference, we also show the standard validation without metamers. As we can see, the realistic optical aberrations of the lens actually improve the spectral estimation in the presence of metamers as long as the aberrations are modeled in the training. With chromatic aberrations,
|
In addition, we inspect the effects of different datasets (cf. Table 1) on the performance. We train the MST++ network on the four datasets with the same image simulation parameters. To eliminate the impact of other factors, we choose the ideal noiseless and aberration-free condition without compression. In the validation, we use the model trained on the ARAD1K dataset to validate on each of the other three datasets. In Table 4, we compare the performance with the models both trained and validated on the original datasets. Results for other networks can be found in the Supplement. They all illustrate the same difficulties in generalization.
|
We also train all other candidate networks with fixed and on-the-fly metamers. The results are summarized in Table 6. Again, the same performance drop applies to all networks. Finally, we show the results of the top-performing network, MST++ on the CAVE, ICVL, and KAUST datasets in Table 7. (See Supplemental Material for more results). As before, the performance drops similarly in the presence of metamers.
|
B
|
≈ WT over the complex field ℂ, for large WT [101].
|
the complex samples r^u_{m,k} are quantized with a finite number of bits. Using sample-per-sample dithered scalar quantization, [100] identifies the quantization squared distortion D as the key parameter for fronthaul quantization optimization. Fixing D, the resulting
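As a quick illustration of why D is the handle for fronthaul optimization, a subtractive dithered scalar quantizer makes the error statistics independent of the input, so D depends only on the step size. A minimal sketch (the step size and signal statistics below are assumed for illustration, not taken from [100]):

```python
import random

def dithered_quantize(x, step, dither):
    # Subtractive dithered scalar quantizer: add dither, round to the
    # step lattice, then subtract the dither again at the decoder.
    return step * round((x + dither) / step) - dither

random.seed(0)
step = 0.5                          # quantization step size (assumed)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
dithers = [random.uniform(-step / 2, step / 2) for _ in samples]

errors = [dithered_quantize(x, step, d) - x for x, d in zip(samples, dithers)]
D = sum(e * e for e in errors) / len(errors)
print(D)
```

With dither uniform on [−Δ/2, Δ/2], the reconstruction error is itself uniform on that interval, giving D = Δ²/12 regardless of the input distribution, which is what makes D a clean design parameter.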
|
The general communication theory developed for CF-MMIMO applies to RadioWeaves in the same way as it does for Radio Stripes and pCell.
|
given by (17), that are produced (at a rate of 1 sample per s×Hz) by AP m in relation to its connected user k. For all APs m ∈ 𝒜_k forming the cluster of user k, these
|
As usual in communication theory, taking this limit with equality, we identify bits per signal dimension, i.e., per complex baseband sample,
|
D
|
The first diagnostic capabilities are provided by a nonlinear disturbance observer for ASVs developed in [14]. As a result, estimates of the unknown environmental forces acting on an ASV impacted by wind, waves, and sea currents become available. These observed disturbances extend the SITAW of an ASV, and this extended knowledge can subsequently be used for improved control that accounts for the environmental impact. In addition, monitoring the condition of various components of the vessel is a crucial core competence at capability level 2. This might be the detection of faults in an integrated power system, such as the diagnosis of propulsion branch faults [15]. Initial solutions for condition monitoring have already been developed. However, such internal SITAW of the ASV is not yet implemented in the DT framework; the general idea of using thermal cameras to track the condition of an engine is presented in Fig. 4. In summary, a diagnostic DT is responsible for troubleshooting and risk assessment.
|
The predicted safe path is shown in green, while the red ending shows a critical predicted unsafe region. Such predictive capabilities can reduce risk and guarantee the safety of potential real-world applications. Since this DT is not connected to the real-world ASV, no loop is closed yet. However, the work shows, in general, what a DT for ASVs and their environment could be capable of. As shown in the previous work, other real-time sensor data, such as AIS and weather data, are streamed into the DT. These capabilities facilitate an extended SITAW for improved risk assessment and safer control.
|
Considering the capability scale of a DT presented in Section 2.1, the existing DT framework is extended by the first predictive and prescriptive capabilities. In addition to the numerically stable ellipse fitting approach for other objects, predictive target tracking using Kalman filters and the probabilistic sensor fusion technique given in (36) allows for path predictions of other vessels, including their cumulative uncertainty estimate. Furthermore, the predictive safety filter (PSF) integrates an additional security factor into the DT, enabling proactive SITAW and COLAV of ASVs. These integrations can guarantee safer sea operations through the prescriptive analysis of what-if scenarios, allowing the RL-driven control approach to achieve a significant risk reduction. Even though several capabilities are already integrated into the DT, each individual capability level has potential for improvements and extensions. Additional physics-based and data-driven models, additional data sources for model corrections, and more intelligent algorithms are planned to be integrated in the future.
|
This work extends this existing framework through improved autonomous, prescriptive, and predictive capabilities. For this purpose, we introduce the theory and deployment of a predictive target tracking method enabling the estimation and prediction of the position and motion related to other dynamic objects using AIS data and synthetic light detection and ranging (LiDAR) measurements. In addition, we introduce the concept of a predictive safety filter (PSF) based on the theory of nonlinear model predictive control (NMPC) for safe control of ASVs. Both methodologies, novel with respect to DTs, are implemented in the extendable DT framework and finally depicted in the results section. Therefore, we introduce the necessary preliminaries in Section 2, followed by the novel methods and extensions of the DT described in Section 3.
|
Level 3 introduces predictive capabilities. Regarding ASVs, predicting the position and motion of other dynamic objects can be essential for proactive control. In addition, weather forecasts and predictive condition monitoring of vessel components can reduce the risk of critical outages during open-sea operations. While no predictive capabilities have been implemented yet, this work addresses this field by implementing a predictive target tracking (PTT) approach and a predictive safety filter (PSF), described in the following sections. The acquired knowledge from PTT extends the SITAW and can be used for the PSF for safe autonomous path following and COLAV.
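As an illustration of the prediction step behind such a PTT module, a constant-velocity Kalman filter propagates both the state and its covariance, which is what yields the cumulative uncertainty estimate alongside the predicted path. A minimal 1D sketch (illustrative values; not the DT implementation):

```python
def kf_predict(x, P, dt, q):
    """One predict step of a 1D constant-velocity Kalman filter.

    x: state [position, velocity]; P: 2x2 covariance as nested lists;
    dt: time step; q: process-noise intensity (white acceleration).
    """
    # State transition F = [[1, dt], [0, 1]]: position advances, velocity held.
    x_pred = [x[0] + dt * x[1], x[1]]
    # Covariance update P_pred = F P F^T + Q.
    p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q * dt ** 3 / 3
    p01 = P[0][1] + dt * P[1][1] + q * dt ** 2 / 2
    p10 = P[1][0] + dt * P[1][1] + q * dt ** 2 / 2
    p11 = P[1][1] + q * dt
    return x_pred, [[p00, p01], [p10, p11]]

x, P = [0.0, 2.0], [[1.0, 0.0], [0.0, 1.0]]
for _ in range(5):                 # predict 5 steps ahead without measurements
    x, P = kf_predict(x, P, dt=1.0, q=0.1)
print(x[0], P[0][0])               # predicted position and its growing variance
```

Repeated prediction without measurement updates inflates P, which is exactly the cumulative uncertainty a PSF can take into account when assessing predicted paths.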
|
D
|
A common bottleneck for IMDP tools is memory consumption, which is only exacerbated on a GPU, as they generally have less memory available.
|
In Figure 4(a), we see that IntervalMDP.jl substantially outperforms the other tools in terms of computation time. For instance, computing the query for the largest model takes bmdp-tool 6865 s and PRISM 1235 s, while IntervalMDP.jl takes 372 s and 30 s for the CPU and GPU implementations, respectively. This is a speed-up of 228× and 41× for the GPU implementation of IntervalMDP.jl relative to bmdp-tool and PRISM, respectively.
|
However, due to the CSC format with Float64 values and Int32 indices, IntervalMDP.jl generally requires less memory than PRISM and bmdp-tool. For example, to run value iteration on the largest model (i.e., pimdp_2 in Table 1 in Appendix B), IntervalMDP.jl requires 4.88 GB of memory. In contrast, PRISM uses 6.32 GB and bmdp-tool uses 5.38 GB to run value iteration on the same problem. This is a 23% reduction relative to PRISM and 9% relative to bmdp-tool, including the Julia runtime.
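The memory argument can be made concrete with a back-of-the-envelope estimate of CSC storage; the model size below is hypothetical and only illustrates the effect of Int32 versus Int64 indices:

```python
def csc_bytes(nnz, ncols, val_bytes=8, idx_bytes=4):
    # CSC stores one value and one row index per nonzero entry,
    # plus a column-pointer array of length ncols + 1.
    return nnz * val_bytes + nnz * idx_bytes + (ncols + 1) * idx_bytes

# Hypothetical model with 10 million transitions over 1 million columns.
nnz, ncols = 10_000_000, 1_000_000
f64_i32 = csc_bytes(nnz, ncols)                  # Float64 values + Int32 indices
f64_i64 = csc_bytes(nnz, ncols, idx_bytes=8)     # Float64 values + Int64 indices
print(f64_i32 / 1e9, f64_i64 / 1e9)              # sizes in GB
```

Halving the index width saves 4 bytes per nonzero, which compounds on models with tens of millions of transitions and matters even more on memory-constrained GPUs.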
|
We evaluate IntervalMDP.jl on various benchmarks and compare it with PRISM (Kwiatkowska et al., 2011) and bmdp-tool (Lahijanian, 2021), to the best of our knowledge the only available tools that perform reachability analysis for IMDPs. The benchmarks include 35 IMDPs taken from the literature, with the total number of transitions between states ranging from a few tens for the smaller models to tens of millions for the larger models. The empirical analysis shows that the CPU implementation of IntervalMDP.jl is on average 2-4× faster than the state of the art, while the GPU implementation can achieve speed-ups of several orders of magnitude on the larger systems. Furthermore, because of the use of sparse matrices and the Julia type system, in all cases IntervalMDP.jl requires less memory than PRISM and bmdp-tool.
|
In order to show the effectiveness of IntervalMDP.jl, we compare it against bmdp-tool and PRISM, which, to the best of our knowledge, are the only existing tools that support value iteration for IMDPs.
|
B
|
This section describes how we trained our DATUM and the other models. In addition to turbulence mitigation networks [28, 77, 46], we also benchmarked several representative video restoration [39, 40] and deblurring networks [79, 81] for a more thorough comparison.
|
To train the proposed model, we used the Adam optimizer [31] with the cosine annealing learning rate schedule [41]. The initial learning rate is 2×10⁻⁴, and the batch size is 8. All dynamic-scene TM networks in this experiment are trained end-to-end from scratch for 800K iterations. To obtain their static-scene variants, we fine-tuned them on the static-scene modality with half the initial learning rate for 400K iterations. We clip the gradient if its L2 norm exceeds 20 to prevent gradient explosion during training.
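The schedule and clipping described above can be sketched as follows (pure-Python stand-ins for the usual framework utilities; the constants match the text, everything else is generic):

```python
import math

def cosine_annealing_lr(step, total_steps, lr_init=2e-4, lr_min=0.0):
    # Cosine annealing: decay smoothly from lr_init to lr_min over total_steps.
    return lr_min + 0.5 * (lr_init - lr_min) * (1 + math.cos(math.pi * step / total_steps))

def clip_grad_l2(grads, max_norm=20.0):
    # Rescale the gradient vector if its global L2 norm exceeds max_norm.
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        scale = max_norm / norm
        return [g * scale for g in grads]
    return grads

print(cosine_annealing_lr(0, 800_000))        # initial learning rate
print(cosine_annealing_lr(800_000, 800_000))  # annealed to lr_min
```

The cosine curve keeps the learning rate near its initial value early on and decays it gently toward the end, while the global-norm clip leaves well-behaved gradients untouched.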
|
After feature registration and deep integration, we propose to augment the embedding with contra-directional information, which is essential to ensure consistent restoration quality across various frames. In addition, like classical methods, a spatially adaptive fusion with adjacent frames is advantageous. We propose the Multi-head Temporal-Channel Self-Attention (MTCSA), as illustrated in Fig. 2. The MTCSA begins by concatenating channels from multiple frames, followed by a 1×1 convolution to shrink the channel dimension. Separable convolution is used to construct the spatially varying query, key, and value on the temporal and channel dimensions, and the dynamic fusion is facilitated by self-attention. Finally, a residual connection is used to stabilize training. Considering the quadratic complexity of MTCSA relative to window size, this size is kept moderate. Additionally, we integrate a hard-coded positional embedding wherein features from the focal frame are positioned at the end. This strategy is essential for boundary frames with disproportionate neighboring frames on either side.
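A toy version of such temporal-channel self-attention can be sketched as follows, with random linear projections standing in for the 1×1 and separable convolutions (shapes, sizes, and projections are illustrative only, not the MTCSA implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_channel_attention(frames):
    """frames: (T, C, H, W) -> fused (T, C, H, W).

    Each of the T*C channel maps becomes one token (flattened to H*W),
    so attention mixes information across both time and channels; a
    residual connection stabilizes the output, mirroring the text.
    """
    T, C, H, W = frames.shape
    D = H * W
    tokens = frames.reshape(T * C, D)
    Wq, Wk, Wv = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(D))              # (T*C, T*C) mixing weights
    return (tokens + attn @ v).reshape(T, C, H, W)    # residual connection

out = temporal_channel_attention(rng.standard_normal((3, 4, 8, 8)))
print(out.shape)
```

The (T·C)×(T·C) attention map is where the quadratic cost mentioned above comes from, which is why the temporal window is kept moderate.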
|
We first trained and evaluated all networks for comparison on a previous Zernike-based synthetic dataset [77] as a preliminary study. We choose PSNR and Complex Wavelet Structural Similarity [62] (CW-SSIM) as the criteria in this paper; the reason for selecting CW-SSIM rather than SSIM is provided in Section A.3. The results in Table 1 show our DATUM outperforms the previous state-of-the-art TMT [77] with 5× fewer parameters and over 10× faster inference speed. We also benchmark a representative single-frame TM network [46] to demonstrate the superiority of multi-frame TM methods.
|
With the proposed simulator, we created the ATSyn dataset to match various real-world turbulence conditions and benchmark deep neural networks for turbulence mitigation. This dataset is segmented into two distinct subsets based on scene type: the ATSyn-dynamic and ATSyn-static. The dynamic sequences contain camera or object motion, whereas the static sequences are each associated with only one underlying clean image. We adopted parameters including focal length, F-number, distance, wavelength, scene size, and sensor resolution to control the simulation. In comparison with the synthetic dataset introduced in [77], which utilized D/r₀ [20] and an empirically chosen blur kernel size, our dataset's parameter space more closely aligns with actual camera settings, making it more representative.
|
A
|
Nevertheless, achieving satisfactory results in controlling LDM for audio generation tasks remains challenging.
|
Vision Language Models such as ChatGPT-4 have demonstrated advanced multi-modal abilities and inspired vision-language LLMs.
|
ClipSonic [8] learns the text-audio correspondence by leveraging the audio-visual correspondences in videos and the multi-modal representation learned by pre-trained VLMs.
|
In this paper, we propose SonicVisionLM, which utilizes the capabilities of powerful vision-language models (VLMs). When provided with a silent video, SonicVisionLM first identifies events within the video using a VLM to suggest possible sounds that match the video content.
|
AudioGen [19] treats audio generation as a conditional language modelling task, while the other three models employ latent diffusion methods to accomplish sound generation.
|
A
|
On the BEAT dataset, we outperform CaMN on the gesture BA and achieve a BA comparable to the other diffusion baselines. As for the Div score, ours is higher than CaMN but lower than the other diffusion baselines. This aligns with the qualitative observations that CaMN produces slow motions while the other diffusion baselines suffer from jittering.
|
On the SHOW dataset, our DiffSHEG is consistently better than TalkSHOW and LS3DCG on both BA and Div, except that LS3DCG attains a higher BA, which may be due to its jittering. The Div of our DiffSHEG for both expression and gesture achieves a score similar to that of real data, indicating that the diversity of our generated results reaches a realistic level on the SHOW dataset.
|
Table 1: Quantitative comparison and ablation study. On the BEAT [26] dataset, we compare our DiffSHEG with CaMN [26], DiffGesture [54], DiffuseStyleGesture (DSG) [47] and LDA [1] with audio and person ID as input. Note that the baseline methods are originally for gesture generation solely, and we apply the same procedure independently for expression generation. On the SHOW [49] dataset, we compare with LS3DCG [14] and TalkSHOW [49]. The ablation studies are conducted on both datasets to demonstrate the effectiveness of our UniEG-Transformer design. Note that we use SRGR on the BEAT dataset and PCM on SHOW dataset. *: indicates that the results are computed using the pre-trained checkpoints provided by authors of TalkSHOW [49].
|
The results in Table 1 demonstrate our method can achieve state-of-the-art performance compared with other baselines on both datasets. We consistently outperform the baseline methods on Fréchet distance (FMD, FED, and FGD) by a large margin, indicating the strong distribution matching ability of DiffSHEG, especially the expression-gesture joint distribution. For SRGR and PCM of gestures, we outperform all the baselines except for the LS3DCG which generates jittering motion.
|
Figure 5: Motion Comparison on the SHOW [49] Dataset. Our method generates more expressive and diverse motions than TalkShow [49] and LS3DCG [14] in terms of both gesture and head pose diversity. Our results also show more agile motions than baselines.
|
A
|
Both the image texture and the sparse noise contained in I_hfre are effective for attacking. Image texture yields a smaller SROCC and a larger MAE value, which implies it plays the more important role. Meanwhile, when both image texture and sparse noise are utilized (i.e., the whole I_hfre is used), the attack performance is the best among them. This confirms the role of image texture and sparse noise in high-frequency images when used as the initial attack direction.
|
To examine the effectiveness of different parts of our attack method, we conduct a detailed performance analysis by attacking the DBCNN model within the LIVE dataset for different settings in Table VI. The original performance on unattacked images is shown in part A of Table VI.
|
TABLE III: Black-Box attack performance with different settings of score boundaries. Experiments are conducted on attacking the DBCNN model within the LIVE dataset.
|
TABLE VI: Black-Box attack performance with different settings. The experiments are conducted on attacking the DBCNN model within the LIVE dataset.
|
TABLE IV: Black-Box attack performance with different settings for d_tex. Experiments are conducted on attacking the DBCNN model within the LIVE dataset.
|
A
|
Hierarchical Skip Connections. As shown in Fig. 4, unlike previous audio-visual masked autoencoders, which operate solely on the encoder representation from the last layer and neglect explicit guidance for other layers [18, 63, 19, 20], HiCMAE incorporates hierarchical skip connections between intermediate encoder and decoder layers to explicitly steer encoder feature learning at different levels and to assist the decoder in completing the task of masked audio-visual reconstruction.
|
We develop three versions of HiCMAE (i.e., base: HiCMAE-B, small: HiCMAE-S, tiny: HiCMAE-T) to meet various needs in real-world applications. Their main difference is the size of the hidden units (C = 512, C = 384, and C = 256, respectively) in the encoder. For all three models, we use N_s = 10 layers in the modality-specific encoders, N_f = 2 layers in the cross-modal fusion encoder, and N_d = 4 layers in the lightweight decoder. We introduce three hierarchical skip connections between the modality-specific encoder and the decoder, specifically between the 4th encoder layer and the 2nd decoder layer, the 7th encoder layer and the 3rd decoder layer, and the 10th encoder layer and the 4th decoder layer. The hierarchical cross-modal contrastive learning is also applied to these selected audio-visual encoder layers.
|
We then investigate the effect of different types of information flow in the cross-modal fusion encoder. We develop three variants of the default information flow in Eq. (5-6) and show their differences in Fig. 5. Specifically, for the raw-input variant, tokens of one modality in each fusion layer always attend to the raw input tokens of the other modality [51], instead of updated tokens from the last layer. For the video-first variant, video tokens first update themselves via audio information from the last fusion layer and then audio tokens attend to the updated video tokens. The audio-first variant is just the reverse of the video-first variant. The ablation results are presented in Table 16. We observe that the model performance is not sensitive to different types of information flow in the cross-modal fusion encoder. Besides, in general, the default information flow works best, followed by the video-first and audio-first variants, and finally the raw-input variant.
|
The audio and video encoders consist of N_s Transformer layers. Each Transformer layer is mainly composed of Multi-Head Self-Attention (MHSA) and a Feed-Forward Network (FFN):
|
Specifically, HiCMAE adds an MHCA layer before each (except for the first) Transformer layer in the decoder.
|
D
|
subsurface offsets ranging from −500 m to +500 m are
|
subsurface offsets ranging from −500 m to +500 m are
|
During VI, the posterior distribution p(x|y) is
|
p_θ(x|y), with learnable
|
statistics (SSIM = 0.48); (d) conditional mean estimate from
|
D
|
The proposed channel estimation method can be efficiently operated under the two-phase communication protocol in Fig. 4. The key idea is based on the fact that the coherence time of the BS-RIS channel (denoted by T_f) is commonly much longer than that of the RIS-User channels (denoted by T_h). Accordingly, S_col and the T_{[k,ℓ]}'s are changed every T_f and T_h time slots, respectively. Following this protocol, the overall training overhead becomes lower as T_f grows. Namely, in the second phase, we only require the N B_r training overhead, instead of M_RF B_c + N B_r. In our simulations in Section V, the effectiveness of the two-phase communication protocol will be demonstrated. Moreover, the two-phase communication protocol can reduce the computational complexity in (42) since in this case, the impact of the O(M³) complexity can be negligible. ■
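The overhead saving can be illustrated numerically: over one BS-RIS coherence interval of T_f slots, the M_RF B_c pilots are spent once while the N B_r pilots recur every T_h slots. A sketch with assumed sizes (not the paper's values):

```python
def avg_overhead_per_block(T_f, T_h, M_RF, B_c, N, B_r):
    """Average training overhead per RIS-User coherence block.

    The BS-RIS part (M_RF * B_c pilots) is refreshed once per T_f slots,
    while the RIS-User part (N * B_r pilots) recurs every T_h slots, so
    one BS-RIS interval spans T_f // T_h user-channel updates.
    """
    blocks = T_f // T_h
    return (M_RF * B_c + blocks * N * B_r) / blocks

# Hypothetical sizes (illustration only):
M_RF, B_c, N, B_r = 8, 16, 32, 4
one_phase = M_RF * B_c + N * B_r      # re-estimating everything each block
two_phase = avg_overhead_per_block(T_f=1000, T_h=100,
                                   M_RF=M_RF, B_c=B_c, N=N, B_r=B_r)
print(one_phase, two_phase)
```

As T_f grows relative to T_h, the amortized BS-RIS term M_RF B_c / (T_f / T_h) vanishes and the per-block overhead approaches N B_r, matching the claim in the text.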
|
In this section, we verify the superiority of the proposed channel estimation method (named CLRA-JO) for XL-RIS assisted XL-MIMO systems by evaluating its performance on the following categories of wireless channels:
|
We studied the channel estimation problem for XL-RIS assisted multi-user XL-MIMO systems with hybrid beamforming structures. For this system, we proposed a unified channel estimation method (named CLRA-JO) which can operate in both far- and near-field channels without any modification, whereas in the existing CS-based methods the dictionary must be designed by taking into account the characteristics of the near- and far-field channels. Via simulations and complexity analysis, it is demonstrated that the proposed CLRA-JO can yield better estimation accuracy than the state-of-the-art CS-based methods while having lower training overhead (e.g., an 80% reduction of the pilot overhead). Our ongoing work is to design channel state information (CSI) feedback suitable for the proposed CLRA-JO so that it can be applicable in frequency-division-duplexing (FDD)-based XL-RIS assisted multi-user XL-MIMO systems.
|
In this section, we evaluate the performances of the proposed channel estimation method for RIS-aided mmWave MU-MIMO systems with hybrid beamforming structures. Regarding the wireless channels in our simulations, we consider the XL-RIS assisted XL-MIMO and RIS-aided massive MIMO systems, defined in Section II-B. In Section V-A, we demonstrate the superiority of the proposed CLRA-JO for XL-RIS assisted XL-MIMO systems by comparing with the state-of-the-art (SOTA) CS-based methods in [13]. Remarkably, for the first time, we take into account the near-field BS-RIS and near-field RIS-User channels (in short, near-near field channel). In Section V-B, we then verify the practicality of the proposed CLRA-JO via experiments on real 28GHz UPA channel data in [44]. Following the performance metric in the related works [12, 13, 29, 28], we employ the normalized mean square error (NMSE) for the evaluation of channel estimation accuracy, given by
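The NMSE metric referred to above is commonly computed as the squared estimation error normalized by the channel energy; a minimal sketch (flat complex vectors; averaging over Monte-Carlo realizations is omitted):

```python
def nmse(h_true, h_est):
    """Normalized MSE between a channel and its estimate, in the usual
    form ||h_est - h_true||^2 / ||h_true||^2, here for flat lists of
    complex entries."""
    num = sum(abs(e - t) ** 2 for e, t in zip(h_est, h_true))
    den = sum(abs(t) ** 2 for t in h_true)
    return num / den

h = [1 + 1j, 0.5 - 0.25j, -0.75 + 0j]      # toy channel vector
h_hat = [x * 1.01 for x in h]              # 1% multiplicative error
print(nmse(h, h_hat))
```

In practice the NMSE is averaged over many channel realizations and usually reported in dB as 10 log₁₀(NMSE).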
|
Motivated by the above, in this paper, we study the channel estimation problem for XL-RIS assisted multi-user XL-MIMO systems with hybrid beamforming structures. Noticeably, it is assumed that both the BS and the RIS are equipped with an extremely large-scale antenna array (ELAA). Accordingly, we consider the following categories of wireless channels: i) Far-field BS-RIS and far-field RIS-User channels (i.e., far-far field channel); ii) Far-field BS-RIS and near-field RIS-User channels (i.e., far-near field channel); iii) Near-field BS-RIS and near-field RIS-User channels (i.e., near-near field channel). Beyond the existing works in [32, 31], we for the first time investigate the channel estimation method for near-near field channels. In addition, as an extension of the existing works, the proposed channel estimation method can be performed when each user is equipped with multiple antennas.
|
C
|
The current research has multiple limitations. Speaker classification accuracy requires improvement and current results were based on expert speaker classification. Machine learning of Whisper transcriptions to bolster speaker classification is one promising direction of research. Transcription results were relatively accurate but were limited to the teacher or child wearing the recorder, suggesting that the distance between mouth and recorder microphone might be a crucial variable. However, speech feature results showed impressive correspondence between automated and expert measurement both at an utterance level and an audio recording level.
|
The current results suggest a framework for advancing the automated analysis of classroom speech. High-quality recorders worn by teachers and children yielded reliable automated transcription of what individuals said. Moreover, there were high levels of agreement in automated versus expert measurements of key features of their speech including MLU and question-asking. We have applied the automated pipeline to a dataset of classroom recordings from 13 children and 3 teachers observed on 17 occasions over one year. This dataset contains 765 hours of recordings and required 200 hours of processing time with the current automated pipeline. The automated pipeline yielded over half a million transcribed utterances, which we are currently analyzing. Thus, automated methods show great potential for producing datasets with which to understand interaction and development in naturalistic contexts.
|
Speaker classification results indicated moderate levels of overall reliability for both teacher and child utterances. Word error rates–a rigorous metric–indicated relatively high levels of transcription accuracy for both teacher and child recordings. Comparison of expert and automated processing was not limited to standard reliability metrics such as word error rate, but was extended to the speech features that are the likely substantive areas of analysis for classroom interaction research. These analyses used all available audio from both teacher and child recorders. The results suggest promising levels of correspondence on teacher and child MLU, rate of speech, use of questions, and responses to questions. However, each of these features must be examined individually. For example, our data suggest that Whisper over-estimated lexical alignment between teacher utterances followed by child utterances. This may be due to the tendency of this large language model to “hear” words in child utterances that had been identified in the previous teacher utterances.
|
Manual speaker classification and transcription is often a limiting factor in understanding classroom speech. For example, expert transcription of the 110 minutes of audio data reported here took approximately 55 hours (5 hours of expert transcription per 10 minutes of audio). Researchers have begun to tackle this problem through automated quantification of selected speech features from classroom audio [21, 22, 20]. We add to this literature with an automated framework for the large-scale analysis of speaker classification and transcription (who said what).
|
In this section, we introduce a comprehensive framework, as shown in Fig. 1, for processing and analyzing large-scale adult-child vocal interactions, leveraging the capabilities of automated language processing tools. We first assess the reliability of the automated pipeline by comparing its output to manual transcription and speaker classification by a human expert. The framework integrates machine learning-based language models for voice transcription (Whisper) and speaker classification (ALICE), which are compared to human expert analysis. In the reliability analysis, we align and synchronize the outputs from the machine learning models with human expert results to evaluate accuracy and consistency. This integrated approach is aimed at providing reliable tools that developmental researchers can use to examine large-scale classroom recordings. To that end, we describe preliminary substantive findings from our reliability analyses.
|
A
|
This research study was conducted retrospectively using human subject data made available in open access by Bilic et al. [2]. Ethical approval was not required as confirmed by the license attached with the open-access data.
|
From the table, we can also observe that the proposed PVTFormer showcased remarkable performance, demonstrating the highest dice coefficient of 86.78%, mIoU of 78.46%, recall of 80.70%, precision of 96.11%, F2 score of 82.86%, and a low HD score of 3.50. From the overall comparison, it can be seen that PVTFormer outperformed eight state-of-the-art medical image segmentation architectures. This can also be observed from the qualitative results, where the proposed model successfully captures intricate details (Fig. 2). Notably, our proposed method captures fine details and contextually significant features, surpassing CNN-based architectures like ResUNet++ [16], ColonSegNet [17], and NanoNet [18] and transformer-based approaches such as TransNetR [20] and TransResUNet [21]. Comparing computational complexity, TransNetR operates with 10.58 GMacs and 10.58 million parameters, whereas PVTFormer uses 43.22 GMacs and 45.51 million parameters. The higher computational cost is justified by the higher performance obtained by PVTFormer compared to TransNetR and the other transformer- and CNN-based approaches.
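For reference, the Dice coefficient and mIoU reported above are both computed from mask overlaps; a minimal sketch for binary masks (toy masks, not the paper's data):

```python
def dice_and_iou(pred, target):
    """Dice coefficient and IoU for binary masks given as flat 0/1 lists.

    Dice = 2|P∩T| / (|P| + |T|),  IoU = |P∩T| / |P∪T|;
    the two are related by IoU = Dice / (2 - Dice).
    """
    inter = sum(p * t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    return 2 * inter / (p_sum + t_sum), inter / union

dice, iou = dice_and_iou([1, 1, 1, 0, 0, 1], [1, 1, 0, 0, 1, 1])
print(dice, iou)
```

Because IoU = Dice / (2 − Dice), IoU is always the stricter of the two scores for imperfect masks, which is worth remembering when comparing tables across papers.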
|
The project is supported by the NIH funding: R01-CA246704, R01-CA240639, U01-DK127384-02S1, and U01-CA268808.
|
The Up block acts as a scaling unit to increase the spatial dimensions of feature maps. It comprises an upsampling layer followed by a residual block. Within the Up block, the input feature map is first passed through a bilinear upsampling to upscale the feature map’s height and width to that of the original input image. The residual block, which consists of two convolutional operations with an identity mapping, refines the upscaled features, enabling the network to learn a more robust representation.
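The Up block above can be sketched as follows. This is a simplified numpy stand-in (nearest-neighbor upsampling instead of bilinear, single-channel 3×3 convolutions, and no nonlinearities), intended only to show the upsample-then-refine-with-skip structure:

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbor stand-in for the bilinear upsampling layer
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def conv3x3(x, k):
    # naive 3x3 convolution with zero padding (single channel)
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def up_block(x, k1, k2):
    u = upsample2x(x)
    # residual block: two convolutions with an identity (skip) mapping
    return conv3x3(conv3x3(u, k1), k2) + u
```

The skip connection means the refinement only has to learn a correction to the upscaled features, which is the usual motivation for residual blocks.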
|
The liver is the largest solid organ in the human body, crucial for metabolic functions and digestive processes. Globally, liver cancer is the third leading cause of cancer-related deaths, highlighting its significant impact on public health [1]. The liver is also a common site for metastases from various abdominal cancers, such as colon, rectum, and pancreas, as well as distant cancers like breast and lung. Therefore, accurate segmentation of the liver is crucial for targeted therapies and surgical planning [2]. With advancements in medical imaging technologies such as computed tomography (CT) and magnetic resonance imaging (MRI), it is possible to visualize and segment the liver with precision, leading to more accurate diagnosis and treatment strategies [3].
|
B
|
All aspects of the environmental model are specified in the MiniZinc modeling language, translated to linear constraints and solved using MILP during the adaptation process.
|
H1: Our approach results in a higher cumulative utility throughout degradation and recovery processes than the existing method.
|
H2: Our approach incurs higher run-time overhead but does not disrupt normal system operations.
|
RQ1: Does our approach achieve a higher overall system utility than existing state-of-the-art approaches?
|
We compare our approach against the state-of-the-art adaptation framework called TOMASys (Bermejo-Alonso et al., 2016). Our experimental results are promising, showing that our approach can achieve a higher level of requirement satisfaction throughout the adaptation process while incurring a reasonable amount of overhead.
|
C
|
Most charts, graphs, and tables are one column wide (3.5 inches / 88 millimeters / 21 picas) or page wide (7.16 inches / 181 millimeters / 43 picas). The maximum height a graphic can be is 8.5 inches (216 millimeters / 54 picas). When choosing the height of a graphic, please allow space for a caption. Figures can be sized between column and page widths if the author chooses; however, it is recommended that figures not be sized less than column width unless absolutely necessary.
|
Figures (line artwork or photographs) should be named starting with the first 5 letters of the author’s last name. The next characters in the filename should be the number that represents the sequential location of this image in your article. For example, in author “Anderson’s” paper, the first three figures would be named ander1.tif, ander2.tif, and ander3.ps.
|
Color/Grayscale figures: Figures that are meant to appear in color, or shades of black/gray. Such figures may include photographs,
|
you do not need to position figures and tables at the top and bottom of each column. In fact, all figures, figure captions, and tables can be placed at the end of your paper. In addition to, or even in lieu of submitting figures within your final manuscript, figures should be submitted individually, separate from the manuscript in one of the file formats listed above in Section VI-J. Place figure captions below the figures; place table titles above the tables. Please do not include captions as part of the figures, or put them in “text boxes” linked to the figures. Also, do not place borders around the outside of your figures.
|
The proper resolution of your figures will depend on the type of figure it is as defined in the “Types of Figures” section. Author photographs, color, and grayscale figures should be at least 300 dpi. Line art, including tables should be a minimum of 600 dpi.
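The dpi requirements above translate directly into minimum raster dimensions. A small sketch (the function name is ours, not part of the template):

```python
def min_pixels(width_in, height_in, kind):
    """Minimum raster size implied by the stated dpi requirements:
    600 dpi for line art/tables, 300 dpi for photos/color/grayscale."""
    dpi = 600 if kind == "line" else 300
    return round(width_in * dpi), round(height_in * dpi)
```

For example, a column-width (3.5 in) photograph needs at least 1050 pixels of width, while the same figure as line art needs 2100.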
|
D
|
Similarly, safety filters based on control barrier functions, such as, e.g., Agrawal and Sreenath (2017), Greeff et al. (2021) and Didier et al. (2023), enforce a decrease of an explicit control barrier function in order to achieve stability of a safe set of states, possibly in addition to a Lyapunov function decrease, in the constraints.
|
In order to provide theoretical robust asymptotic stability guarantees of the resulting closed-loop system in Section 4, we require a Lyapunov function, which will be defined implicitly through the online optimization problem.
|
The presented work builds on predictive safety filters and is meant to provide an extension that guarantees not only constraint satisfaction, but stability of the underlying closed-loop dynamics. In particular, Wabersich and Zeilinger (2018) introduce the concept of predictive safety filters for linear systems with additive disturbances as an alternative to invariance-based safety filters such as, e.g., Akametalu et al. (2014) and Fisac et al. (2019), with an explicit specification of safe sets. The concept is then extended for nonlinear uncertain systems in Wabersich and Zeilinger (2021). An overview of different safety filter methods can be found in Wabersich et al. (2023).
|
for analyzing the proposed approach in the sense that it introduces the notion of augmented state-warmstart dynamics and difference inclusions that allow one to analyze the evolution of the system for any feasible solution and, in our case, any value of the proposed input. Our work differs from Allan et al. (2017), and all the suboptimal MPC literature covered by it, in that i) we do not optimize the same cost that defines the Lyapunov function, but rather an arbitrary cost, and ii) we leverage so-called robust-by-design strategies rather than relying on inherent robustness properties.
|
With respect to these methods, we i) consider uncertain systems and provide robust constraint satisfaction guarantees, and ii) do not require that a Lyapunov function be computed explicitly; instead, it is obtained implicitly through an MPC-like design.
|
D
|
Fig. 5 illustrates the trajectory optimization for a UAV-mounted FDR, comparing the solution from Algorithm 1 with the benchmark trajectories for which the benchmark TDMA scheme is applied, in a scenario with an increased noise power of $-114$ dB for an area of $L$ equal to 750 meters. In this scenario, unlike in Fig. 4 where $\sigma^2=-144$ dB and the UAV-mounted FDR remains stationary, the increased noise level results in areas where, if GNs are located within them, the received SNR falls below $\gamma_{\mathrm{thr}}$ and UAV movement becomes necessary to maintain effective communication. As can be seen, for $A_n=12$, the circular trajectory achieves a minimum rate of 0.51 Mbps, the rhombus 0.25 Mbps, and the spiral 0.28 Mbps, while the trajectory optimized through Algorithm 1 leads to a significantly higher rate of 12.28 Mbps. Moreover, for all examined trajectories, when $A_n$ is less than 6, the network's minimum rate tends towards zero, due to the increased probability of GNs experiencing outages and the UAV's battery limitations, which are insufficient to enable the UAV to serve all GNs effectively. Therefore, Fig. 5 emphasizes the critical interplay between the number of antennas, UAV battery capacity, and trajectory optimization in ensuring robust network performance, especially in scenarios with challenging communication conditions such as increased noise or higher $\gamma_{\mathrm{thr}}$ thresholds.
Finally, it should be noted that for the UAV-mounted RIS case, if $\sigma^2=-114$ dB then $r_{\mathrm{min}}=0$ across all feasible values of $M$, proving again the superiority of the UAV-mounted FDR over the UAV-mounted RIS in establishing a communication link between the BS and the GNs.
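The outage logic described above (minimum rate dropping to zero when any GN's received SNR falls below the threshold) can be sketched as follows; all parameter values in the sketch are hypothetical, not taken from the paper's setup:

```python
import math

def min_rate(p_tx_db, gains_db, sigma2_db, gamma_thr_db, bw_hz):
    """Minimum Shannon rate over all GNs; zero if any GN is in outage,
    i.e., its received SNR is below gamma_thr."""
    rates = []
    for g_db in gains_db:
        snr_db = p_tx_db + g_db - sigma2_db   # received SNR in dB
        if snr_db < gamma_thr_db:
            return 0.0                        # one GN in outage kills min rate
        snr = 10 ** (snr_db / 10)
        rates.append(bw_hz * math.log2(1 + snr))
    return min(rates)
```

This makes explicit why increased noise power ($\sigma^2$ from $-144$ to $-114$ dB) can force UAV movement: with less link margin, some GN positions push the SNR below $\gamma_{\mathrm{thr}}$ unless the UAV repositions.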
|
(Figure: four subplots showing the GN and BS placements over the $750\times750$ m area; axes: Length (m) vs. Width (m).)
|
A
|
IEMOCAP consists of 7433 utterances and 151 dialogues in 5 sessions, each involving two speakers per session. Each utterance is labeled as one of six emotional categories: happy, sad, angry, excited, frustrated and neutral. The train and development datasets consist of the first four sessions randomly divided at a 9:1 ratio. The test dataset consists of the last session.
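The 9:1 train/development split of the first four sessions described above can be sketched as a seeded random partition; the function and seed are illustrative, not the authors' exact split:

```python
import random

def split_ids(n_utterances, seed=0):
    """Randomly split utterance indices (sessions 1-4) 9:1
    into train and development sets."""
    ids = list(range(n_utterances))
    random.Random(seed).shuffle(ids)
    cut = int(0.9 * len(ids))
    return ids[:cut], ids[cut:]
```

Keeping the last session entirely as the test set avoids speaker overlap between training and evaluation, since each session has its own pair of speakers.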
|
We purposely exclude CMU-MOSEI (Zadeh et al., 2018), a well-known multimodal sentiment analysis dataset, as it comprises single-speaker videos and is not suitable for ERC, where emotions dynamically change within each conversation turn.
|
Recently, ERC has gained considerable attention in the field of emotion analysis. ERC can be categorized into text-based and multimodal methods, depending on the input format. Text-based methods primarily focus on context modeling and speaker relationships (Jiao et al., 2019; Li et al., 2020; Hu et al., 2021a). In recent studies (Lee and Lee, 2021; Song et al., 2022a), context modeling has been carried out to enhance the understanding of contextual information by pre-trained language models using dialogue-level input compositions. Additionally, there are graph-based approaches (Zhang et al., 2019; Ishiwatari et al., 2020; Shen et al., 2021; Ghosal et al., 2019) and approaches that utilize external knowledge (Zhong et al., 2019; Ghosal et al., 2020; Zhu et al., 2021).
|
Emotion recognition holds paramount importance, enhancing the engagement of conversations by providing appropriate responses to the emotions of users in dialogue systems (Ma et al., 2020). The application of emotion recognition spans various domains, including chatbots, healthcare systems, and recommendation systems, demonstrating its versatility and potential to enhance a wide range of applications (Poria et al., 2019). Emotion Recognition in Conversation (ERC) aims to identify emotions expressed by participants at each turn within a conversation. The dynamic emotions in a conversation can be detected through multiple modalities such as textual utterances, facial expressions, and acoustic signals (Baltrušaitis et al., 2018; Liang et al., 2022; Majumder et al., 2019; Hu et al., 2022b; Chudasama et al., 2022). Figure 1 illustrates an example of a multimodal ERC.
|
On the contrary, multimodal methods (Poria et al., 2017; Hazarika et al., 2018a, b; Majumder et al., 2019) reflect dialogue-level multimodal features through recurrent neural network-based models. Other multimodal approaches (Mao et al., 2021; Chudasama et al., 2022) integrate and manipulate utterance-level features through hierarchical structures to extract dialogue-level features from each modality. EmoCaps (Li et al., 2022) considers both multimodal information and contextual emotional tendencies to predict emotions. UniMSE (Hu et al., 2022b) proposes a framework that leverages complementary information between Multimodal Sentiment Analysis and ERC. Unlike these methods, our proposed TelME is one in which the strong teacher leads emotion recognition while simultaneously bolstering attributes from weaker modalities to complement and enhance the teacher.
|
A
|
For the SHS (II-B) with (10) and (11) under Assumptions 1-4 and Conditions 1-4, the closed set $\mathcal{A}$ is uniformly globally attractive in probability.
|
i.e., the certification candidate $\mathcal{U}(x)$ does not increase in expected value during jumps along solutions.
|
Proof: Following the radial unboundedness of $\mathcal{U}(x)$ and the upper bound of the expected value in (29), we obtain
|
for $(t,j)\in(\operatorname{dom} x_i)\cap(\tilde{Z}^i_k\times\mathbb{Z}_{\geq 0})$ and $k\in\mathbb{Z}_{\geq 0}$. The right-hand sides of (26) and (27) reflect the upper bounds of the function $\mathcal{U}$ in expected value for the stable and unstable modes, respectively, which together imply a new upper bound of $\mathcal{U}$ in expected value for all $(t,j)\in\operatorname{dom} x_i$, i.e.,
|
$\mathcal{U}(x):=V(\tilde{x})+\sum_{i=1}^{N}\left(\gamma_{l_i,i}\phi_{l_i,i}W_i^{2}+\chi_i\right)$; then $\mathcal{U}$ is a certification candidate, i.e., $\mathcal{U}(x)\in\mathcal{D}(\mathcal{H})$. Indeed, it holds from (11) that $G(D\times\mathcal{V})\subset\mathbb{X}$, which implies that C1 holds, i.e., $C\cup D\cup G(D\times\mathcal{V})\subset\operatorname{dom}\mathcal{U}$; as well, C2 holds from the fact that $\mathcal{U}(x)=0$ iff $x\in\mathcal{A}$, i.e., $L_{\mathcal{U}}(0)=\mathcal{A}$, and $\mathcal{U}(\mathbb{X}\setminus\mathcal{A})>0$. Assumption 4 and Condition 4 imply that the function $\mathcal{U}(x)$ is locally Lipschitz on an open set containing $C\setminus L_{\mathcal{U}}(0)$ and continuous on its domain; based on Assumption 1, [1, Lem. 4.1] implies that any upper semicontinuous (weaker than locally Lipschitz) function for $\mathcal{H}$ that satisfies C1-C2 is a certification candidate for $\mathcal{H}$. Indeed, C3 holds due to the upper semicontinuity of $\mathcal{U}(\cdot)$: $-\mathcal{U}(\cdot)$ is a normal integrand since it is upper semicontinuous [14, Exam. 14.30], and in terms of [14, Exam. 14.32 & Thm. 14.37], the measurability of $v\mapsto G(x,v)$ and the outer semicontinuity of $x\mapsto G(x,v)$ in Assumption 1 (Fig. 1) imply that the quantity $\int_{\mathbb{R}^m}\sup_{g\in G(x,v)}\mathcal{U}(g)\,\mu(dv)$ is well-defined for each $x\in D$. Therefore, the partially Lipschitz function $\mathcal{U}(x)$ is a certification candidate relative to the closed set $\mathcal{A}$ for $\mathcal{H}$. Then we show the radial unboundedness of $\mathcal{U}(x)$. Define
|
B
|
$N_{\mathrm{view}}$
|
Table 4: Comparison of different methods on sparse-view data corrupted by 5% Gaussian noise in terms of PSNR and SSIM.
|
Table 2: Evaluation results on the $150^{\circ}$ limited-angle reconstruction problem corrupted by mixed Gaussian and Poisson noise.
|
Table 6: Comparison of different methods on limited-angle data corrupted by 5% Gaussian noise in terms of PSNR and SSIM.
|
Table 5: Comparison of different methods on sparse-view data corrupted by 10% Gaussian noise in terms of PSNR and SSIM.
|
C
|
At the conclusion of the study session, the smartwatch is removed, and the collected data is transferred to the cloud. The entire study session for each participant typically lasts between 25 to 30 minutes.
|
Following this baseline period, participants engage in activities that mimic sitting, standing, and walking, each lasting for five minutes. During the sitting activity, participants may perform tasks such as working on a laptop, writing, or using a mobile phone. In the standing activity, participants engage in actions like drinking water or having phone conversations. Finally, participants simulate walking for five minutes. At various intervals during these activities, participants are instructed to mimic coughing sounds.
|
This paper presents the results of a user study to evaluate the efficacy of smartwatch-based cough detection. We developed a smartwatch app for data collection and conducted a user study during which participants performed different activities. Additionally, we developed a highly accurate algorithm capable of identifying coughs and implemented advanced clustering techniques to differentiate between types of coughs. Our study achieved an accuracy of 98.49% in detecting coughing events. Notably, our system demonstrated the capability to distinguish between four distinct types of cough with high precision. Through this work, we contribute to the evolving field of sound-based health monitoring, offering a valuable tool for healthcare professionals and individuals to maintain and improve respiratory health.
|
We enrolled thirty-two student participants in the age range of 20 to 28, with an emphasis on ensuring diversity within this particular age bracket. Our recruitment process was designed to create a representative sample of participants. The data collection phase spanned 28 days, during which all participants actively engaged in study-related activities.
|
Dataset: Participants for the study were recruited through email invitations, and individuals who expressed interest indicated their availability via a Google form. Subsequently, these participants visited our research laboratory to take part in the study. Before the study began, all participants provided signed consent forms as a prerequisite.
|
D
|
$$\tilde{\boldsymbol{D}}=\begin{bmatrix}\boldsymbol{I}&\boldsymbol{0}&\cdots&\boldsymbol{0}\\ \boldsymbol{D}&\boldsymbol{I}&\cdots&\boldsymbol{0}\\ \vdots&&\ddots&\\ \boldsymbol{D}^{K-1}&\boldsymbol{D}^{K-2}&\cdots&\boldsymbol{I}\end{bmatrix}.$$
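As a numerical sketch, the block lower-triangular matrix $\tilde{\boldsymbol{D}}$ above, whose $(i,j)$-th block is $\boldsymbol{D}^{i-j}$ for $i\geq j$ and zero otherwise, can be assembled as follows:

```python
import numpy as np
from numpy.linalg import matrix_power

def build_Dtilde(D, K):
    """Assemble the block lower-triangular matrix with (i, j)-block
    D^(i-j) for i >= j and zero blocks above the diagonal."""
    n = D.shape[0]
    Dt = np.zeros((K * n, K * n))
    for i in range(K):
        for j in range(i + 1):
            Dt[i * n:(i + 1) * n, j * n:(j + 1) * n] = matrix_power(D, i - j)
    return Dt
```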
|
The contributions of this paper are twofold. First, we relax the assumption that the set of unknowns estimated by EM is purely continuous, allowing for a general set that comprises both continuous and discrete parameters. We derive mild conditions ensuring the convergence of EM to a stationary point of the likelihood function. Notably, when the unknowns belong to an open set, our results reduce to those of [6]. Second, we apply these results to establish the convergence of the EM-based SBL algorithm presented in [9].
|
Let $\{\boldsymbol{\theta}^{(r)}\}_{r=0}^{\infty}$ be the sequence generated by the EM algorithm, as summarized in (6), to solve the ML optimization problem in (2). Assume the following conditions:
|
We derived the conditions for the convergence of the EM algorithm with discrete unknown parameters. As an illustration, we demonstrated the convergence of the EM-based SBL algorithm outlined in [9], proving its convergence to the set of stationary points of the maximum likelihood cost. Extending the results to the generalized class of Majorization-Minimization algorithms is an interesting future work.
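To make the EM iteration and its monotone-ascent property concrete, here is a minimal sketch for a 1-D two-component Gaussian mixture with unit variances and equal weights, estimating only the means. This is a generic illustration of EM, not the SBL algorithm of [9]:

```python
import numpy as np

def em_gmm(x, mu, iters=30):
    """EM for a two-component unit-variance Gaussian mixture (means only).
    Returns the final means and the per-iteration log-likelihood
    (up to an additive constant), which EM guarantees is nondecreasing."""
    ll_hist = []
    mu = list(mu)
    for _ in range(iters):
        # E-step: component responsibilities (normalizing constants cancel)
        d0 = np.exp(-0.5 * (x - mu[0]) ** 2)
        d1 = np.exp(-0.5 * (x - mu[1]) ** 2)
        r1 = d1 / (d0 + d1)
        ll_hist.append(np.sum(np.log(0.5 * d0 + 0.5 * d1)))
        # M-step: responsibility-weighted mean updates
        mu = [np.sum((1 - r1) * x) / np.sum(1 - r1),
              np.sum(r1 * x) / np.sum(r1)]
    return mu, ll_hist
```

Here both unknowns are continuous; the result discussed above extends the convergence analysis to mixed continuous/discrete parameter sets.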
|
By simplifying (2) using (15), the objective function of an optimization problem equivalent to (2) reduces to
|
D
|
However, the computational complexity of Eqn. 2 is $O(L^{2})$.
|
The quadratic complexity makes it hard to employ the vanilla approach for high-resolution video coding.
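The source of the quadratic cost is the $L\times L$ score matrix formed by vanilla scaled dot-product attention, as in this generic numpy sketch (not the paper's coding model):

```python
import numpy as np

def attention(Q, K, V):
    """Vanilla scaled dot-product attention. The L x L score matrix S
    is what makes time and memory scale as O(L^2) in sequence length."""
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)                    # shape (L, L)
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)          # row-wise softmax
    return P @ V
```

For a high-resolution frame, $L$ (the number of tokens) grows with pixel count, so the $L^2$ scores quickly dominate both compute and memory.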
|
The rapid development of social media and video applications drives the increase in video data volume, bringing challenges to video compression [1, 2].
|
indicating the effectiveness of employing global and local motion compensation for motion estimation in learning-based video coding.
|
However, the quadratic complexity of vanilla attention impedes the compression of high-resolution videos.
|
A
|
Based on the developed platform, users can engage in the following research and practices: (1) Prompt Engineering: Users can create appropriate scenario descriptions and prompt cues for custom scenarios, facilitating the utilization of (M)LLMs for vehicle control. (2) Model Evaluation: Users can evaluate the performance of various (M)LLM-based models in autonomous driving within scenarios involving intricate interactive processes. (3) Framework Enhancement: Users can enhance submodules within the closed-loop framework we provide to achieve improved performance.
|
This paper proposes an integrated platform called LimSim++ for scenario understanding, decision-making, and evaluation in autonomous driving with (M)LLMs. The platform is open-source and is expected to empower future research on autonomous driving. The paper also presents a baseline (M)LLM-driven closed-loop framework with a memory mechanism, which is illustrated in experiments on various scenarios, including intersections, roundabouts, and ramps.
|
Figure 1: Platform composition. LimSim++ is the first closed-loop evaluation platform specifically developed for (M)LLM-driven autonomous driving.
|
Introducing an open-source evaluation platform for (M)LLMs in autonomous driving. LimSim++ is the first open-source evaluation platform specifically designed for research on autonomous driving with (M)LLMs, supporting scenario understanding, decision-making, and evaluation systems.
|
While offline datasets contribute to refining the general capabilities of (M)LLMs for autonomous driving through fine-tuning, validating the model’s adaptability across diverse scenarios remains a formidable challenge. The knowledge-driven paradigm emerges as a promising direction for realizing autonomous driving [5], with its continuous learning hinging on continuous feedback within a closed-loop environment. Given the substantial cost associated with real-world testing [34], closed-loop simulation testing becomes an essential facet of autonomous driving technology [15, 35, 36]. This paper introduces LimSim++, designed to fulfill the research requirements of (M)LLM-driven autonomous driving. Tailored for end-to-end autonomous driving solutions, LimSim++ incorporates a unique combination of text and image prompt engineering and includes an optional module featuring a flexible and user-friendly driver agent.
|
C
|
In Section F.2, we provide an analysis of SNORE uncertainty to its random seed and its initialization.
|
In Section F.2, we provide an analysis of SNORE uncertainty to its random seed and its initialization.
|
In this section, we provide a theoretical analysis of our regularization SNORE and a convergence analysis of the associated algorithm.
|
Subsequent research effort should focus on quantifying the errors associated with SNORE in order to confirm its utility as a reliable reconstruction algorithm.
|
This sensitivity to errors is particularly pronounced in scenarios involving post-processing algorithms such as segmentation, detection, or classification applied to the reconstructed image, where errors in the reconstruction process may propagate into erroneous decision-making based on the image data.
|
C
|
The remainder of this paper is organized as follows. Section 2 reviews related works. Section 3 introduces the proposed approach. Section 4 presents the experimental setup and results. Finally, Section 5 concludes the paper and discusses future works.
|
To refine sEMG denoising performance, this study proposes SDEMG, a conditional score-based diffusion model for sEMG denoising. The proposed method progressively adds isotropic Gaussian noise to the clean sEMG during the diffusion process. In the reverse process, we leverage the sEMG waveform contaminated by ECG and the noise scale variable as conditions for the NN, and the Gaussian noise is reverted to the clean sEMG segment. Experimental results show that SDEMG outperforms the previous FCN-based denoising method in signal quality, providing a refined ECG removal approach for clinical sEMG applications.
|
Several single-channel ECG removal methods have been developed in previous studies, including HP and TS [7, 10]. HP removes the frequency band of the ECG, inevitably leading to the loss of the low-frequency part of the sEMG signal. In contrast, TS removes ECG artifacts in the time domain. It extracts ECG templates for subtraction by either filtering or waveform averaging [7, 21], and the ECG artifacts are subtracted from the contaminated sEMG waveform. However, the effectiveness of TS relies on the assumption that sEMG signals follow a zero-mean Gaussian distribution, which may not be satisfied in real-world scenarios. This study applies both HP and TS as comparative sEMG denoising methods. HP is implemented with a cutoff frequency of 40 Hz, and TS is followed by HP for optimal results.
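As a sketch of the HP baseline's effect (an idealized FFT-domain brick-wall filter rather than the actual filter design used in the study), removing everything below 40 Hz eliminates most ECG energy but also discards the low-frequency part of the sEMG:

```python
import numpy as np

def highpass_fft(x, fs, cutoff=40.0):
    """Idealized high-pass: zero out all frequency bins below `cutoff` Hz."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[freqs < cutoff] = 0.0
    return np.fft.irfft(X, n=len(x))
```

This illustrates the distortion trade-off noted above: any sEMG content below the cutoff is lost along with the ECG artifact.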
|
Fig. 4 presents an example of ECG contamination removal using SDEMG. It can be observed that the ECG artifacts in the noisy sEMG (SNR=-8 dB) are eliminated in the denoised waveform, and the sEMG waveform exhibits minimal distortion when compared to the clean sEMG. This underscores the capability of SDEMG to provide high-quality sEMG signals. One challenge of SDEMG is its relatively high computational effort for optimal performance. This issue may be addressed by involving ODE solvers or applying parameters pruning and quantization.
|
sEMG and ECG signals have frequency bands between 10 to 500 Hz and 0 to 100 Hz, respectively [9]. The overlapping frequency bands pose difficulties in segregating the two signals. To address this issue, several single-channel ECG removal methods have been developed, such as high-pass filters (HP) and template subtraction (TS) [7, 10]. However, HP causes distortion by removing the low-frequency part of sEMG signals, and TS relies on the assumption that ECG is quasi-periodic and sEMG follows a zero-mean Gaussian distribution, which may not hold in real-world scenarios. These limitations make these ECG removal methods struggle under demanding conditions, such as low signal-to-noise (SNR) ratios.
|
B
|
Motivated by the success of self-attention mechanisms in natural language processing [26], ViT was the first to utilize a pure multi-head self-attention mechanism for the image recognition task, with state-of-the-art performance [5]. This showcases its promising capabilities in modeling long-range dependencies. Techniques like shifted windows have further tailored ViT, resulting in the Swin-Transformer [18], which enhances its applicability to dense prediction tasks in computer vision, such as image segmentation and detection [19, 31, 17]. In medical image segmentation, the integration of ViT with U-Net architectures, inspired by traditional CNN designs, has also led to various hybrid and pure ViT-based U-Nets. For instance, TransUNet is the first work to harness the feature learning power of ViT in the encoders of UNet [4]. UNETR combines ViT with UNet for 3D segmentation [9], while Swin-UNet and DCSUnet further explore purely Swin Vision Transformer network blocks within a U-Net-based structure [3, 28].
|
While Transformers excel in capturing long-range dependencies, their high computational cost, due to the quadratic scaling of the self-attention mechanism with input size, poses a challenge, particularly for high-resolution biomedical images [32, 21]. Recent developments in State Space Models (SSMs) [6, 22, 27], especially Structured SSMs (S4) [8], offer a promising solution with their efficient performance in processing long sequences. The Mamba model enhances S4 with a selective mechanism and hardware optimization, showing superior performance in dense data domains [7]. The introduction of the Cross-Scan Module (CSM) in the Visual State Space Model (VMamba) further enhances Mamba’s applicability to computer vision tasks by enabling the traversal of the spatial domain and converting non-causal visual images into ordered patch sequences [16]. Inspired by these capabilities, we propose leveraging Visual Mamba blocks (VSS) within the U-Net architecture to improve long-range dependency modeling in medical image analysis, resulting in Mamba-UNet. The evolution of U-Net with various network blocks and the positioning of our proposed Mamba-UNet are briefly illustrated in Figure 1.
|
Figure 1: A brief introduction of the evolution of recent developments of UNet with incorporation of Transformer and State Space Models (SSM) for medical image segmentation.
|
In this paper, we introduced Mamba-UNet, a purely Visual Mamba block-based UNet-style network for medical image segmentation. Its performance demonstrates that Mamba-UNet is superior to similar classical networks such as UNet and Swin-UNet. In the future, we aim to conduct more in-depth explorations of medical image segmentation tasks across different modalities and targets, with comparisons to more segmentation backbones. Besides, we aim to extend Mamba-UNet to 3D medical images and to semi-/weakly-supervised learning [14] to further advance developments in medical imaging.
|
Motivated by the success of self-attention mechanisms from natural language processing [26], ViT was the first to utilize a pure multi-head self-attention mechanism for the image recognition task with state-of-the-art performance [5]. This showcases its promising capabilities in modeling long-range dependencies. Techniques like shifted windows have further tailored ViT, resulting in the Swin-Transformer [18], which enhances its applicability to dense prediction tasks in computer vision, such as image segmentation and detection [19, 31, 17]. In medical image segmentation, the integration of ViT with U-Net architectures, inspired by traditional CNN designs, has also led to various hybrid and pure ViT-based U-Nets. For instance, TransUNet is the first work to harness the feature learning power of ViT in the encoder of UNet [4]. UNETR combines ViT with UNet for 3D segmentation [9], while Swin-UNet and DCSUnet further explore purely Swin Vision Transformer network blocks within a U-Net-based structure [3, 28].
|
A
|
An important question now is whether the strain generated by the gate voltage can be non-volatile. This will allow the reconfiguration to be non-volatile as well. There are many reports of non-volatile remanent strain in piezoelectrics at room temperature [8, 9, 10, 11, 12, 13, 14] although the strain’s longevity has not been studied. If the strain remains non-volatile, we can reconfigure a BSN to an ASN and the reconfiguration will survive subsequent removal of the gate voltage. To revert the ASN back to a BSN, we can simply apply strain of the opposite sign, which will raise the energy barrier back in the nanomagnet and convert the ASN to a BSN.
|
3 Landau-Lifshitz-Gilbert simulations to study random magnetization dynamics in an LBM under different strains
|
Fig. 4 shows the time variations of the normalized magnetization component along the major axis of the nanomagnet, i.e. $m_y$ (which is also $\cos\theta$), under different tensile stress. The magnetization is normalized to the saturation magnetization. Clearly, under no stress, the behavior is that of a BSN where the magnetization fluctuates rail to rail and is mostly in the state +1 or -1, and not in any intermediate state. As we increase the stress (and depress the energy barrier), the behavior gradually transitions to that of an ASN wherein the magnetization visits all states between -1 and +1 with almost equal likelihood.
|
The LBM is usually a nanomagnet with in-plane anisotropy that is shaped like an elliptical disk with small (but non-zero) eccentricity. The in-plane potential energy profile (energy versus magnetization orientation) of such an LBM is shown schematically in Fig. 1(a). Normally, there is a clear double-well feature which can be discerned despite the low potential barrier. The two ground states (or wells) correspond to the magnetization pointing along either direction along the major axis (or easy axis) of the elliptical nanomagnet. At room temperature, thermal energy can allow the magnetization to transcend the energy barrier separating the wells, which will allow the magnetization to fluctuate randomly between the two potential wells. If we take a snapshot in time, we will usually find the magnetization in one of the two wells, i.e., it will tend to point along one of the two directions along the major axis, which encode the bits +1 and -1. This leads to the digital or “binary” behavior.
|
We carried out Landau-Lifshitz-Gilbert (LLG) simulations of the magnetization dynamics in an LBM at room temperature under different strains to see how the magnetization fluctuation behaves. The LBM we studied is an elliptical Co nanomagnet with major axis 100 nm, minor axis 99 nm and thickness 5 nm. A nanomagnet of these dimensions is likely to be monodomain and hence the macrospin approximation holds. The saturation magnetization $M_s = 10^6$ A/m, the magnetostriction coefficient $\lambda_s = -35$ ppm and the Gilbert damping coefficient $\alpha = 0.01$ correspond to a Co nanomagnet. The coupled LLG equations governing the temporal evolutions of the scalar components of the magnetization were solved with a finite difference method [15, 16] with a time step of 0.1 ps. We assumed positive (tensile) uniaxial strain applied along the major axis of the nanomagnet since Co has negative magnetostriction. This depresses the energy barrier within the nanomagnet. The initial condition was that the magnetization was aligned close to the major axis of the nanomagnet.
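The simulation loop described above can be sketched as follows: a minimal stochastic macrospin LLG integrator with the time step and material parameters from the text. The easy-axis anisotropy field amplitude `H_K` and the simple explicit-Euler scheme are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

# Sketch of a stochastic macrospin LLG integration (assumed simplified
# effective field: uniaxial easy axis along y plus a thermal field).
GAMMA = 2.21e5        # gyromagnetic ratio, m/(A*s) (fields expressed in A/m)
ALPHA = 0.01          # Gilbert damping coefficient (Co, from the text)
MS    = 1.0e6         # saturation magnetization, A/m (from the text)
MU0   = 4e-7 * np.pi
KT    = 1.380649e-23 * 300.0   # thermal energy at room temperature, J
DT    = 0.1e-12       # 0.1 ps time step, as in the text
VOL   = np.pi * 50e-9 * 49.5e-9 * 5e-9  # 100 nm x 99 nm x 5 nm elliptical disk

def llg_step(m, h_eff, rng):
    """One explicit-Euler step of the stochastic LLG equation (a Heun scheme
    would be more accurate; Euler keeps the sketch short)."""
    # Thermal field standard deviation from the fluctuation-dissipation relation
    h_th_std = np.sqrt(2 * ALPHA * KT / (MU0 * MS * VOL * GAMMA * DT))
    h = h_eff + rng.normal(0.0, h_th_std, 3)
    mxh = np.cross(m, h)
    m = m - GAMMA / (1 + ALPHA**2) * (mxh + ALPHA * np.cross(m, mxh)) * DT
    return m / np.linalg.norm(m)  # enforce |m| = 1 (macrospin constraint)

H_K = 5e3  # assumed easy-axis anisotropy field amplitude, A/m (illustrative)
rng = np.random.default_rng(0)
m = np.array([0.1, 0.995, 0.0]); m /= np.linalg.norm(m)
traj = []
for _ in range(5000):
    m = llg_step(m, np.array([0.0, H_K * m[1], 0.0]), rng)
    traj.append(m[1])  # m_y = cos(theta), the quantity plotted in Fig. 4
```

Lowering `H_K` (mimicking a strain-depressed barrier) makes `traj` explore intermediate values rather than staying rail to rail, which is the BSN-to-ASN transition discussed above.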
|
A
|
(Green, 2016), we see huge potential for improvement, though this requires sensor-based insight into the material properties.
|
Since the network receives the time difference between the moment at which the images are recorded and the time at which the properties of the fresh concrete are to be predicted, the network implicitly learns how the properties of the concrete change over time. This can be used to predict the properties not only at a certain point in time, but also continuously over the entire fresh concrete age. Fig. 3 shows examples of what such a continuous prediction of the slump flow diameter over a time interval looks like. Note that at this stage of the research only concretes which exhibit a more or less pronounced decrease in consistency over time were investigated. The model is thus only trained to identify and quantify this specific behaviour, which can be traced back to the type of chemical admixtures used in the project. Changing the admixture so as to yield a steady or even an increasing flow over time will be studied in future work and will certainly require an adaptation of the model or at least of its training.
|
The ReCyCONtrol111https://www.recycontrol.uni-hannover.de/en/ research project addresses this lack of digitization and automation in the concrete sector.
|
One part of the project focusses on the prediction of the fresh concrete properties. Since the moment of production, i.e. during the mixing process, offers the most opportunities for adjusting the concrete properties in case of quality deviations, the prediction of these properties should be done during the mixing process. Also, as the properties of the concrete may further change between the mixing process and its placement, due to the cement's chemical hydration process, the behaviour of the properties after mixing must be modeled over time. We therefore formulate the goal of predicting the future properties of the concrete, e.g. for the time of placement, already during the production step. If deviations from the target properties at the time of placement are estimated in this way, countermeasures in the form of chemical additives can be used to change the properties to reach the desired values.
|
This work is supported by the Federal Ministry of Education and Research of Germany (BMBF) as part of the research project ReCyControl [Project number 0336260A], https://www.recycontrol.uni-hannover.de/en/ and by the LUIS computing cluster funded by the German Research Foundation (DFG) - INST 187/742-1 FUGG.
|
B
|
In this work, we introduce a framework called Re-Diffinet, for modeling discrepancy between the outputs of a segmentation model like U-Net and the ground truth, using the advantages of Denoising Diffusion Probabilistic Models and U-Net model. By explicitly modeling the discrepancy, we intend to build upon previous segmentation models, force diffusion models to focus explicitly on the regions that other models miss, and exploit diffusion models’ ability to capture finer details and variability in the data.
|
The treatment for glioma patients generally consists of surgery, radiotherapy, and chemotherapy, and the outcomes of patients with gliomas vary widely according to the glioma type and prognostic factors. Due to their superior soft tissue contrast, multimodal MRI images, which allow the complexity and heterogeneity of the tumor lesion to be better visualized than a CT scan, have become the gold standard for surgical decision-making for glioma patients [hanif2017glioblastoma, keunen2014multimodal, van2019perfusion]. However, visual identification of tumor margins in CT or MRI still remains a challenge for neurosurgeons and researchers [wang2019advance]. Clinically, brain tumor masks are often obtained through Magnetic Resonance Imaging (MRI) scans, which require experienced radiologists to manually segment tumor sub-regions [baid2021rsnaasnrmiccai]. This is a long process that does not scale to the needs of all patients. Thus, the recent growth of machine learning technologies holds promise to provide a reliable and automated segmentation solution to save time and help medical professionals with this process [Luu2022].
|
The training dataset provided for the BraTS23 challenge [baid2021rsna] consists of 1251 brain MRI scans along with segmentation annotations of tumorous regions. The 3D volumes were skull-stripped and resampled to 1 mm³ isotropic resolution, with dimensions of (240, 240, 155) voxels. For each example, four modalities were given: native (T1), post-contrast T1-weighted (T1Gd), T2-weighted (T2), and T2 Fluid Attenuated Inversion Recovery (T2-FLAIR). Segmentation labels were annotated manually by one to four experts. Annotations consist of three disjoint classes: enhancing tumor (ET), peritumoral edematous tissue (ED), and necrotic tumor core (NCR). To get the ground truth labels for these datasets, all imaging volumes have then been segmented using the STAPLE [warfield2004simultaneous] fusion of previous top-ranked BraTS algorithms, such as nnU-Net [isensee2021nnu]. These segmented labels were then refined manually by volunteer neuroradiology experts following a consistently communicated annotation protocol. The manually refined annotations were finally approved by experienced board-certified attending neuro-radiologists.
|
Deep learning techniques have been widely used in brain tumor segmentation, with U-Net being the state of the art. U-Net and its variants, such as U-Net++ [UNetPP], 3D U-Net [3DUNet], V-Net [VNet], and Attention-U-Net [AttUNet], have been applied to brain tumor segmentation. Transformer architectures have also been applied in brain tumor segmentation: TransU-Net and Swin-U-Net show potential to predict accurate tumor margins. However, the state-of-the-art models in brain tumor segmentation are still based on encoder-decoder architectures such as U-Net [isensee2021nnu] and its variations. For instance, Luu et al. [Luu2022] modified the nnU-Net model by adding axial attention in the decoder. Futrega et al. [futrega2021optimized] optimized the U-Net model by adding foreground voxels to the input data and increasing the encoder depth and number of convolutional filters. Siddiquee et al. [siddiquee2021redundancy] applied adaptive ensembling to minimize redundancy under perturbations.
|
The model was trained on overlapping regions: whole tumor (WT), tumor core (TC), and enhancing tumor (ET). TC entails the ET as well as the necrotic (NCR) parts of the tumor, and WT describes the complete extent of the disease. The diffusion model was trained using a compound loss function combining Dice loss, binary cross-entropy (BCE) loss, and mean square error (MSE) loss. The model was trained using the AdamW optimizer with a learning rate of 0.0001 and a weight decay of 0.0001. The network's performance was evaluated using 5-fold cross-validation. The data were randomly shuffled and equally split into 5 groups for cross-validation.
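As a rough illustration of such a compound objective, the sketch below combines Dice, BCE, and MSE terms in plain NumPy on predicted probabilities; the equal weighting `w` is an assumption, since the loss weights are not stated here.

```python
import numpy as np

def dice_loss(p, y, eps=1e-6):
    """Soft Dice loss on predicted probabilities p and binary targets y."""
    inter = np.sum(p * y)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps))

def bce_loss(p, y, eps=1e-7):
    """Binary cross-entropy; probabilities are clipped for numerical safety."""
    p = np.clip(p, eps, 1.0 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def mse_loss(p, y):
    return float(np.mean((p - y) ** 2))

def compound_loss(p, y, w=(1.0, 1.0, 1.0)):
    """Weighted Dice + BCE + MSE; equal weights are an illustrative assumption."""
    return w[0] * dice_loss(p, y) + w[1] * bce_loss(p, y) + w[2] * mse_loss(p, y)
```

In practice each term would be computed per region (WT, TC, ET) on the network's sigmoid outputs and averaged.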
|
B
|
Calvo-Zaragoza, J., Rizo, D.: End-to-end neural optical music recognition of monophonic scores. Applied Sciences 8(4), 606 (2018)
|
Calvo-Zaragoza, J., Toselli, A.H., Vidal, E.: Handwritten music recognition for mensural notation with convolutional recurrent neural networks. Pattern Recognition Letters 128, 115–121 (2019)
|
Calvo-Zaragoza, J., Toselli, A.H., Vidal, E.: Handwritten music recognition for mensural notation with convolutional recurrent neural networks. Pattern Recognition Letters 128, 115–121 (2019)
|
Calvo-Zaragoza, J., Toselli, A.H., Vidal, E.: Handwritten music recognition for mensural notation with convolutional recurrent neural networks. Pattern Recognition Letters 128, 115–121 (2019)
|
Calvo-Zaragoza, J., Toselli, A.H., Vidal, E.: Handwritten music recognition for mensural notation with convolutional recurrent neural networks. Pattern Recognition Letters 128, 115–121 (2019)
|
A
|
In this context, $x_{t+1}$ and $u_t$ are the state and action, $\mathbf{x}_g$ represents the desired state value and is a constant reference. $\mathbf{L}$, $\mathbf{M}$, $\mathbf{N}$ are weight matrices. These matrices are used to fine-tune the relative impacts of the state, action vector, and rate of state changes on the overall cost. Among these, the rate of state changes captures factors such as comfort in specific control problems. For instance, in the context of a moving car, if the acceleration change rate ($jerk$) is too high, it will make passengers uncomfortable.
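A minimal sketch of such a quadratic stage cost is given below; the exact functional form (state error, control effort, and state-change rate weighted by $\mathbf{L}$, $\mathbf{M}$, $\mathbf{N}$) is an illustrative assumption consistent with the description above.

```python
import numpy as np

def stage_cost(x, u, x_prev, x_g, L, M, N):
    """Quadratic stage cost: state-tracking error, control effort, and rate of
    state change (e.g. jerk/comfort). L, M, N are the weight matrices from the
    text; this particular quadratic form is an assumption for illustration."""
    e = x - x_g          # deviation from the constant reference x_g
    dx = x - x_prev      # rate of state change (per step)
    return float(e @ L @ e + u @ M @ u + dx @ N @ dx)
```

Tuning `N` upward penalizes rapid state changes, which is how a comfort constraint such as low jerk would enter the cost.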
|
In control engineering, RL is adept at uncovering optimal control laws. Through iterative interactions with the environment, optimal control strategies can be identified without an explicit system model, relying instead on a trial-and-error approach. Reinforcement learning is conceptualized as a Markov Decision Process (MDP), where the agent (controller) observes the state of the environment, takes actions, receives rewards, and refines its policy based on feedback. Through learning and optimization, reinforcement learning progressively unveils the optimal control law, empowering the system to attain optimal performance in specific tasks or goals [5]. The reward function in reinforcement learning is akin to the cost function in control theory, defining the agent's objectives in the environment and exerting direct influence over the algorithm's performance and convergence speed [6].
|
It can be used for iterative solutions to obtain the action-value function $Q^{\pi}(s,a)$. Ultimately, the core objective of reinforcement learning is to determine an optimal policy
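The iterative computation of the action-value function can be illustrated by tabular Q-value iteration on a toy MDP; the transition probabilities and rewards below are invented for the example, not taken from the text.

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative numbers only).
P = np.array([               # P[s, a, s']: transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.0, 1.0], [0.5, 0.5]],
])
R = np.array([[1.0, 0.0],    # R[s, a]: immediate reward
              [0.0, 2.0]])
gamma = 0.9                  # discount factor

# Repeatedly apply the Bellman optimality operator until Q converges.
Q = np.zeros((2, 2))
for _ in range(500):
    Q = R + gamma * np.einsum("sap,p->sa", P, Q.max(axis=1))

policy = Q.argmax(axis=1)    # greedy policy w.r.t. the converged Q
```

The fixed point of this iteration is the optimal action-value function, from which the optimal policy is read off greedily.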
|
In contrast to the cost function in optimal control, where the objective is to minimize a cost function, reinforcement learning strives to maximize expected returns.
|
Optimal control theory is centered on crafting control policies to attain optimal performance for a given system based on a specific performance metric. This metric is commonly expressed as a cost or objective function, and the objective of optimal control is to discover a control policy that minimizes or maximizes this performance metric[4].
|
C
|
Colposcopy, CT (Computed Tomography), Digital Photography, Fundus Photography, Infrared Reflectance Imaging, MR (Magnetic Resonance Imaging), OCT (Optical Coherence Tomography), Dermoscopy, Endoscopy, Microscopy Images, X-Ray, Ultrasound
|
As introduced in Sec 3, we generate a set of incorrect options for each item, which are utilized to construct multiple-choice question-answer pairs. The number of candidate options for each question ranges from 2 to 4. In Fig. 7, we illustrate the QA items with different numbers of options. As depicted, questions with two options are “Yes/No” selections. On the other hand, questions with three options predominantly focus on Lesion Grading, which judges the severity of the disease.
|
Figure 1: Left: Overview of our OmniMedVQA dataset. OmniMedVQA covers the majority of radiologic modalities and anatomical regions of the human body, such as the brain, eyes, oral cavity, chest, breast, abdomen, upper limb, lower limb, feet, etc. Right: Illustrations of samples from five different question types.
|
Although medical LVLMs exhibit lower accuracy when considering the overall dataset, they tend to perform well in modalities characterized by substantial differences from general images, such as CT and MRI. However, in modalities with similar distributions to those in general domain images, medical-specialized LVLMs fail to demonstrate notably superior performance.
|
Lung, Mammary Gland, Lung, Hand, Upper Limb, Eye, Uterus, Intestine, Skin, Shoulder, Kidney, Gallbladder, Pancreas, Spleen, Liver, Pelvic, Ovary, Blood Vessel, Spine, Urinary System, Adipose Tissue, Muscle Tissue, Oral Cavity, Knee, Foot, Lower Limb
|
D
|
One primary application of logic in informatics is for representing, understanding, and reasoning about systems; this determines the field of logical systems modelling. In this context, ‘modelling’ is used in both the general and the mathematical sense. The goal is to utilize logic to represent, analyze, and simulate systems by interpreting logical structures and relationships in terms of concepts relevant to the model in question. We discuss several examples below.
|
Firstly, while both readings are useful, they are individually limited in the context of systems modelling: sharing/separation expresses the structure of distributed systems and number-of-uses expresses the dynamics of the resources involved
|
The paradigm of base-extension semantics provides an inferentialist account of resource semantics that uniformly encompasses both the number-of-uses readings — as found in the family of linear logics — and the sharing/separation semantics — as found in bunched logics, such as BI and relevance logics.
|
In the field of logical systems modelling, substructural logics are useful because of their resource interpretations. The study of such interpretations of logics, especially in the context of systems modelling, is called resource semantics — see Section 3.
|
One primary application of logic in informatics is for representing, understanding, and reasoning about systems; this determines the field of logical systems modelling. In this context, ‘modelling’ is used in both the general and the mathematical sense. The goal is to utilize logic to represent, analyze, and simulate systems by interpreting logical structures and relationships in terms of concepts relevant to the model in question. We discuss several examples below.
|
C
|
The generation of the input-output pairs for the DNNs is designed. Moreover, the training data is enriched through augmentation to enhance the estimation performance for both S&C channels.
|
Numerical results have shown that under different SNR conditions, the proposed approach possesses superior generalization ability and significantly improves the NMSE performance compared to the benchmark scheme.
|
For the estimation performance of $\mathbf{A}$ in Fig. 6(a), the NMSE of the proposed approach decreases as $M$ increases and outperforms the benchmark scheme under different SNR conditions.
|
Numerical results demonstrate the substantial improvements achieved by the proposed approach over the benchmark scheme under various signal-to-noise ratio (SNR) conditions and system parameters.
|
Furthermore, the proposed approach has been evaluated under a wide range of channel dimensions and results revealed a considerable NMSE performance improvement over the benchmark scheme.
|
C
|
Particularly, the LoS components of the uplink channels (i.e., $\mathbf{b}_k$ and $\mathbf{g}_k$) are respectively given by $\mathbf{b}_{k,\mathrm{LoS}}=\mathbf{a}(\vartheta_{U_k\mathrm{B}})$ with AoA $\vartheta_{U_k\mathrm{B}}$ and $\mathbf{g}_{k,\mathrm{LoS}}=\mathbf{a}(\vartheta_{U_k\mathrm{I}})$ with AoA $\vartheta_{U_k\mathrm{I}}$, while those of the downlink channels (i.e., $\mathbf{d}_j$ and $\mathbf{f}_j$) are respectively denoted by $\mathbf{d}_{j,\mathrm{LoS}}=\mathbf{a}(\vartheta_{\mathrm{B}D_j})$ with AoD $\vartheta_{\mathrm{B}D_j}$ and $\mathbf{f}_{j,\mathrm{LoS}}=\mathbf{a}(\vartheta_{\mathrm{I}D_j})$ with AoD $\vartheta_{\mathrm{I}D_j}$.
|
In such a case, the designed ELM is only required to update the parameter $\mathbb{\Theta}$ by minimizing the loss function as
|
Furthermore, the normalized mean square error (NMSE) is employed as an estimation performance metric, and is denoted by
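Although the equation itself is elided here, NMSE is commonly defined as the squared (Frobenius-)norm estimation error normalized by the squared norm of the true channel, often reported in dB; a small NumPy helper under that standard assumption:

```python
import numpy as np

def nmse(h_est, h_true):
    """NMSE = ||H_hat - H||^2 / ||H||^2 (standard Frobenius-norm definition;
    assumed here, since the paper's exact expression is not reproduced)."""
    num = np.linalg.norm(h_est - h_true) ** 2
    den = np.linalg.norm(h_true) ** 2
    return float(num / den)

def nmse_db(h_est, h_true):
    """NMSE expressed in dB, as typically plotted in estimation papers."""
    return float(10.0 * np.log10(nmse(h_est, h_true)))
```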
|
For the sensing channel, the path loss at distance $d_\mathrm{S}$ is modeled as $\xi_\mathrm{S}=\xi_0\left(\frac{d_\mathrm{S}}{d_0}\right)^{-\beta_\mathrm{S}}$, where $\xi_0=-30$ dB represents the path loss at the reference distance $d_0=1$ m. The path losses of the communication channels (i.e., $\xi_{\mathrm{IB}}$, $\xi_{U_k\mathrm{B}}$, $\xi_{U_k\mathrm{I}}$, $\xi_{\mathrm{B}D_j}$, and $\xi_{\mathrm{I}D_j}$) are formulated similarly to the sensing one.
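In dB form, this distance-dependent model reduces to a straight line in log-distance, $\xi_\mathrm{dB} = \xi_{0,\mathrm{dB}} - 10\beta\log_{10}(d/d_0)$; a small helper, assuming the exponent $\beta$ and distances are given:

```python
import numpy as np

def path_loss_db(d, beta, xi0_db=-30.0, d0=1.0):
    """Path loss xi = xi0 * (d/d0)^(-beta), evaluated in dB.
    xi0 = -30 dB at d0 = 1 m follows the text; beta is channel-dependent."""
    return float(xi0_db - 10.0 * beta * np.log10(d / d0))
```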
|
Furthermore, to describe the path loss factor of each channel, a distance-dependent path loss model is employed.
|
D
|
It involves the design of the pilot sequences adopted at the FD ISAC BS, pilot sequences employed at the UE, and IRS phase-shift vectors.
|
On the basis of the proposed three-stage estimation approach and the designed input-output pairs, a CNN-based DL framework is proposed.
|
Since the propagation environment of the reflected SAC channels is more complicated, the hidden layers in the RE-CNN increase to two CLs and two FFLs to promote its feature extraction ability.
|
Considering the different propagation environments of the direct and reflected channels, two CNN architectures are carefully designed to form the proposed DL framework.
|
The proposed DL framework realized by the DE-CNN and RE-CNN has been developed to successively estimate the direct SAC channels, reflected communication channel, and reflected sensing channel.
|
C
|
TABLE I: FLOPs ($\times 10^{7}$), runtime (ms), and BER ($\times 10^{-3}$) comparisons (16-QAM, SNR $=20$ dB). The FLOPs and runtime decomposition are provided in Appendix C.
|
In particular, the graph neural network (GNN)-based detector for OTFS modulation [7], which builds upon a pair-wise Markov random field (MRF) to leverage structural system information, outperforms the MP-based and AMP-based detectors by notable margins. Nonetheless, since the GNN parameters are learned from training data, prior information of the transmitted symbols cannot be best utilized, leaving a gap for improvement [8, 9, 10]. Also, without dedicated optimization, the inter-Doppler interference (IDI) symbols in OTFS systems result in a complex pair-wise MRF and bring heavy computation to the GNN-based detector. Noticing that AMP [11] is powerful in removing inter-symbol interference by progressively refining prior information of transmitted symbols, an AMP-GNN network was developed in [10] for massive multiple-input multiple-output detection. Although the AMP-GNN network enjoys the benefits of both AMP and GNN, when applied directly to OTFS data detection, the computational complexity remains an obstacle.
|
In this letter, we propose a novel AMP-GNN-based detector for OTFS modulation, where an AMP and a GNN module collaborate to enhance data detection accuracy by exchanging intermediate estimation results. To achieve cost-effective data detection, a learning-based IDI approximation scheme is developed, which simplifies the pair-wise MRF of OTFS systems that largely determines the complexity of the GNN module. Simulation results show that the proposed detector outperforms the baselines by 41.4–80.7% in bit error rate (BER).
|
This letter developed an AMP-GNN-based data detector for OTFS modulation, which exploits prior information obtained from an AMP module to improve the data symbol estimates of a GNN module in an iterative manner. To reduce the computational complexity, learning-based approximation for inter-Doppler interference was further proposed. Simulation results validated the advantages of the proposed AMP-GNN-based detector over existing baselines. For future research, it will be interesting to replace AMP with more robust Bayesian optimization algorithms and extend this study for joint channel estimation-data decoding in OTFS systems.
|
In this section, we develop an AMP-GNN-based detector for OTFS systems, which alternates between an AMP module and a GNN module over $T$ iterations as depicted in Fig. 1. A real-valued approximate signal model is first developed for ease of implementation, followed by operations of the AMP module and the GNN module. To further reduce the detection overhead, an IDI approximation scheme is also proposed.
|
C
|
It is crucial to recognize the impact of confounding variables, especially in a small dataset. Key factors such as age, gender, stroke severity and type, medical comorbidities, medications, cognitive function, psychological factors, rehabilitation history, time since stroke, environmental factors, and nutritional status have an effect on patient performance. Stratifying different populations based on these factors would be an important next step.
|
The performance of the AST model in “Mel mono” was notably effective, with an AUC of 0.83 and ST of 0.89, but its SP of 0.60 suggests a significant trade-off, with a higher tendency for false positives. According to Gong et al. (2021a), the AST model does not require as many training epochs as CNN-attention hybrid models. Consistent with this, the AST model required only 6 epochs of training on our dataset to achieve these metrics, fewer than the CNN-attention hybrid models and the other Transformer models explored in this study, which needed significantly more epochs.
|
Our study examines the associations between spectrogram preprocessing techniques and the ensuing performance of audio classification models, underscoring an important consideration for clinical applications: the nuanced efficacy of preprocessing approaches has a significant bearing on leveraging transfer learning. Our work suggests that while RGB preprocessing exhibits superior performance in conjunction with ImageNet pre-training, the “Mel mono” approach, when pre-trained on expansive public audio datasets, surpasses RGB's effectiveness. This insight is crucial, suggesting that in clinical settings, where data limitations and intrinsic differences are prevalent, adopting a more standardized and contextually tailored approach to preprocessing could significantly enhance the performance of deep learning models. Moreover, the observed variances in model architecture performance, particularly the robustness of transformer-based models versus traditional CNNs in handling limited training epochs, offer a promising avenue for refining audio classification frameworks. This suggests that, through strategic selection of preprocessing techniques and models, there may be more optimal audio classification strategies that can improve diagnostics with heightened accuracy and efficiency in clinical environments. This has implications for voice as a biomarker in stroke and other neurologic conditions, in addition to other disease states where data limitations may be intrinsic to the health condition, including rare diseases.
|
In this section, we explore various neural network architectures, including ConvNeXt and DenseNet for CNN-based models, ConvLSTM2D for temporal data analysis, and Vision Transformer (ViT) and SWIN Transformer for transformer-based models. Additionally, we introduce pre-trained audio feature extractors such as YAMNet, VGGish, and Trill. To address classification tasks, we employ different loss functions and optimizers. For CNN-based models, particularly DenseNet, we implement a hybrid loss function that combines Cross-Entropy and Contrastive Loss. We also incorporate class weights to handle dataset imbalances. Transformer-based models are trained using Cross-Entropy Loss with the inclusion of class weights. The Adam optimizer is chosen for its adaptive learning rate capabilities. Our preprocessing methods involve the use of grayscale audio spectrograms and the conversion of spectrograms into RGB images for select models. Additionally, we explore the use of Superlet transforms in pre-processing. Finally, we evaluate our classifiers using per-participant prediction aggregation (Majority Voting).
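The per-participant aggregation step (majority voting over clip-level predictions) can be sketched as follows; the tie-breaking behavior of `Counter.most_common` is an implementation detail, not specified in the text.

```python
from collections import Counter

def participant_prediction(clip_preds):
    """Aggregate per-clip labels into one per-participant label by majority
    vote over all clips belonging to that participant."""
    return Counter(clip_preds).most_common(1)[0][0]

def aggregate(preds_by_participant):
    """Apply majority voting to a dict mapping participant id -> clip labels."""
    return {pid: participant_prediction(v) for pid, v in preds_by_participant.items()}
```

Evaluation metrics (AUC, sensitivity, specificity) would then be computed on these participant-level labels rather than on individual clips.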
|
In summary, our study underscores the effectiveness of modern CNN architectures, such as DenseNet and ConvNeXt, in the field of clinical audio classification. These architectures demonstrate robustness, often rivaling or even surpassing the capabilities of transformer models, particularly in scenarios involving small datasets. A key factor in this success is the strategic use of open-source pre-trained weights, which not only accelerates the development process but also significantly enhances model accuracy.
|
D
|
Although MA-TISK is a non-linear modulation, it can be demodulated like a linear one thanks to its combination with repetition coding and differential precoding.
|
In contrast to binary GMSK discussed in Section II, a modification of the phase mapping is not just a frequency shift and/or mirroring when a repetition code is used. For quaternary phase constellations, it makes a difference whether the repeated phase shift, i.e., the one that the symbol s=0 is mapped to, is at the edge of the constellation (i.e., has minimum or maximum phase shift) or is an inner phase shift between these extremes.
|
In that case, we can sample the received signal, window it for N symbol periods, and perfectly separate the N subcarriers via FFT. In practice, some crosstalk may remain: the frequency pulse may not have fully decayed, or channel dispersion may jeopardize the orthogonality. These effects can be counteracted by slightly reducing the rate R, as demonstrated in Section IV.
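The FFT-based separation described above can be illustrated in the ideal, dispersion-free case. This sketch uses generic complex subcarriers at frequencies k/N cycles per sample (an assumption of this note, not the paper's exact waveform): windowing N samples and taking an FFT recovers each subcarrier symbol exactly.

```python
import numpy as np

N = 8                                   # number of orthogonal subcarriers
rng = np.random.default_rng(0)
symbols = rng.choice([1 + 0j, -1 + 0j, 1j, -1j], size=N)  # one symbol per subcarrier

n = np.arange(N)
# Superimpose the N subcarriers over one window of N samples.
signal = sum(symbols[k] * np.exp(2j * np.pi * k * n / N) for k in range(N))

# The FFT projects the windowed signal onto each subcarrier; the ideal case
# has zero crosstalk, so the symbols are recovered exactly.
recovered = np.fft.fft(signal) / N
assert np.allclose(recovered, symbols)
```

With a residual frequency pulse or channel dispersion, the off-bin projections would no longer be exactly zero, which is the crosstalk the text refers to.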
|
The transition phase may introduce some minor overhead. However, the overheads of competing systems, e.g., roll-off factors or blank subcarriers, are avoided.
|
One minor issue remains, though: the frequencies of the subcarriers have shifted, so their orthogonality is lost. This, however, is easy to fix, as explained in the sequel.
|
C
|
As shown in Table 3, the experimental results indicate that different discrete codec representations do not exhibit significant differences in terms of the Word Error Rate (WER) metric. However, for the Speaker Similarity (SPK) metric, we observe that the codecs extracted by the Language-Codec model perform better on downstream models. By merely replacing the codec representation, the average speaker similarity increases by 10%. Additionally, in subjective Mean Opinion Score (MOS) evaluations, we discover that the codec representations extracted by the Language-Codec model exhibit certain improvements in terms of audio quality and audio similarity compared to those extracted by the Encodec model. However, no significant differences are observed in terms of prosodic representations.
|
We conducted ablation tests on the ConvNeXt Blocks and Fourier Transform structure of the decoder to evaluate its impact on codec reconstruction. Specifically, we employed the pre-trained encoder module and the quantizer module from Encodec, and the pre-trained decoder module from Vocos for inference. In Table 1, we refer to this model as Vocos. By comparing the Encodec model and the Vocos model on the LibriTTS Test-Clean and Test-Other datasets, we observed that replacing the decoder with the Vocos structure significantly improves the audio quality in the four-channel setting. However, in the eight-channel setting, the F1 and SPK scores of the Vocos model were slightly lower than those of the Encodec model, although the UTMOS, PESQ, and STOI scores were higher. Subjectively, the Vocos structure effectively mitigates the artifacts introduced by the Encodec model. Furthermore, when comparing the Vocos model with the Language-Codec model, we found that the Language-Codec model outperforms the Vocos model significantly. This observation suggests that the MCRVQ mechanism, which normalizes the information in the quantizer, further enhances the audio reconstruction quality.
|
Based on the observations from Table 1, the following conclusions can be drawn: 1) Regarding the audio reconstruction of the four-channel codecs, the Language-Codec model significantly outperforms all baseline models in terms of objective metrics. While there is a slight decrease in audio reconstruction quality when the number of channels is reduced from eight to four in the baseline models, the Language-Codec model maintains a consistently good reconstruction performance. Additionally, it is noteworthy that the four-channel reconstruction of Language-Codec even surpasses the eight-channel performance of several baseline models. For instance, in terms of the PESQ and STOI metrics, the four-channel Language-Codec model outperforms the eight-channel SpeechTokenizer model by 0.03 and 0.5 on the LibriTTS Test-Clean set. Furthermore, in the UTMOS metric, the four-channel Language-Codec model significantly outperforms the eight-channel Encodec model. 2) In the eight-channel codec reconstruction, the Language-Codec model also maintains SOTA reconstruction quality. Although the eight-channel SpeechTokenizer model achieved similar scores to the Language-Codec model in terms of the UTMOS metric, it significantly underperformed in other metrics such as STOI, SPK, and PESQ compared to the Language-Codec model, and even performed noticeably worse than the Encodec model. Considering the overall auditory perception and average audio quality, the Language-Codec model achieves the best performance. 3) We noticed that all comparative models maintain similar conclusions and trends between the Test-Clean (clean dataset) and Test-Other (noisy dataset) conditions. Moreover, the Language-Codec model demonstrates good reconstruction quality even in noisy environments. 4) It is worth mentioning that the Encodec model commonly employed in downstream tasks consistently performs lower than the Language-Codec model in the UTMOS metric.
Upon carefully listening to relevant audio samples, we identified that the Encodec model may introduce more reconstruction artifacts, a characteristic that significantly affects UTMOS scores.
|
Moreover, during the training process of our downstream zero-shot Text-to-Speech model, we find that when the downstream model predicts codecs generated by the Language-Codec model, the accuracy of codec prediction decreases when the number of channels exceeds four. Although this does not have a significant impact on the performance of the downstream model, future endeavors could explore the use of smaller or variable codebooks to further enhance the results.
|
We also validated the role of the Masked Channel Residual Vector Quantization (MCRVQ) module in the language-codec model. Considering that the design purpose of the MCRVQ mechanism is to reduce the difficulty of text generation in downstream tasks, we conducted ablation experiments on the zero-shot TTS model downstream. Specifically, we first replaced the MCRVQ module in the codec model with the RVQ module in the encodec model while keeping the same training steps and other configurations. We refer to this experiment setup as Language-Codec w/o MCRVQ. We used Language-Codec w/o MCRVQ to extract the corresponding discrete codec features and retrained the downstream VALL-E and MobileSpeech models. The experimental results, as shown in Table 4, revealed that there was no significant difference between Language-Codec w/o MCRVQ and Language-Codec in terms of the robustness metric WER. However, in terms of the objective metric of speaker similarity, omitting the MCRVQ module resulted in a decrease of 0.04 and 0.05 similarity in VALL-E and MobileSpeech, respectively, indicating that the MCRVQ module indeed enhances the codec generation capability of the downstream speech synthesis model by weakening the difficulty of text generation for codec. In addition, we also conducted corresponding subjective CMOS tests, from Table 5, it can be observed that in the autoregressive discrete codec modeling experiments of the VALL-E model, the CMOS values of the synthesized audio decreased by 0.13 when the MCRVQ module was omitted compared to the original Language-Codec model. 
Similarly, in the parallel discrete codec modeling experiments of the MobileSpeech model, the CMOS values of the synthesized audio decreased by 0.17 when the MCRVQ module was omitted compared to the original Language-Codec model, which further indicated that the codec generated by the Language-Codec w/o MCRVQ model had lower subjective audio quality and audio similarity than the codec generated by the Language-Codec model.
|
C
|
In contrast to the above-mentioned recent results, we focus on real-time event identification using PMU data and physics-based modal decomposition methods along with interpretable ML models. Our event identification framework leverages the approach in [7] and involves two steps: (i) extract features using physics-based modal decomposition methods; (ii) use such features to learn logistic regression (LR) and gradient boosting (GB) models for event classification. Our primary goal is to design an algorithmic approach that generates adversarial examples to evaluate the robustness of this physics-based event classification framework. We evaluate our attack algorithm in two distinct settings: white box and gray box. In the white box setup, we assume that the attacker has full knowledge of the classification framework including the classification model (i.e., knows both (i) and (ii) detailed above), and can only tamper with a subset of PMUs. On the other hand, for the gray box setup, we assume that the attacker does not know the ML classifier used by the system operator or the data that was used for training; however, the attacker has knowledge of the aspect (i) of the framework, has access to historical data from the same network, and can tamper with a subset of PMUs. In either setting, the attack algorithm perturbs event features in the direction of the classifier’s gradient until the event is incorrectly classified. Using detailed event-inclusive PSS/E generated synthetic data for the 500-bus South Carolina system, we show that both types of attacks can significantly reduce the accuracy of the event classification framework presented in [7].
|
We use logistic regression (LR) and gradient boosting (GB) classification models as the ML models for the evaluation of the framework and the design of adversarial attacks. For LR, classification requires computing the probability of event y_i as
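The equation itself is not reproduced in this excerpt; in the standard logistic-regression formulation (an assumption of this note, with feature vector x_i, weight vector w, and bias b), the event probability would read:

```latex
P(y_i = 1 \mid \mathbf{x}_i)
  = \sigma\!\left(\mathbf{w}^{\top}\mathbf{x}_i + b\right)
  = \frac{1}{1 + e^{-\left(\mathbf{w}^{\top}\mathbf{x}_i + b\right)}}
```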
|
We first describe the event identification framework, introduced in [7], and the two classification models we consider.
|
In contrast to the above-mentioned recent results, we focus on real-time event identification using PMU data and physics-based modal decomposition methods along with interpretable ML models. Our event identification framework leverages the approach in [7] and involves two steps: (i) extract features using physics-based modal decomposition methods; (ii) use such features to learn logistic regression (LR) and gradient boosting (GB) models for event classification. Our primary goal is to design an algorithmic approach that generates adversarial examples to evaluate the robustness of this physics-based event classification framework. We evaluate our attack algorithm in two distinct settings: white box and gray box. In the white box setup, we assume that the attacker has full knowledge of the classification framework including the classification model (i.e., knows both (i) and (ii) detailed above), and can only tamper with a subset of PMUs. On the other hand, for the gray box setup, we assume that the attacker does not know the ML classifier used by the system operator or the data that was used for training; however, the attacker has knowledge of the aspect (i) of the framework, has access to historical data from the same network, and can tamper with a subset of PMUs. In either setting, the attack algorithm perturbs event features in the direction of the classifier’s gradient until the event is incorrectly classified. Using detailed event-inclusive PSS/E generated synthetic data for the 500-bus South Carolina system, we show that both types of attacks can significantly reduce the accuracy of the event classification framework presented in [7].
|
In order to evaluate the vulnerability of the event identification framework, we consider two settings: (i) white box; and (ii) gray box. In the white box attack setting, we assume the following: (a) the attacker has full knowledge of the event identification framework, (b) access to all measurements and their corresponding ground truth event label but with restricted ability to only tamper with a subset of PMUs, and (c) knowledge of the ML classifier used by the system operator, including all the parameters of the classifier learned by the operator.
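The gradient-direction perturbation described above can be sketched for the logistic-regression case. This is a minimal white-box illustration, not the paper's algorithm: the fixed step size, the iteration cap, and the 0/1 mask modeling the attacker's restriction to a subset of PMU-derived features are all assumptions of this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_attack(x, y, w, b, step=0.1, max_iter=200, mask=None):
    """Nudge the feature vector x along the gradient of the logistic-regression
    loss until the predicted label no longer matches the true label y.

    mask: 0/1 vector selecting the features the attacker may tamper with
    (all features by default).
    """
    x = np.asarray(x, dtype=float).copy()
    mask = np.ones_like(x) if mask is None else mask
    for _ in range(max_iter):
        p = sigmoid(w @ x + b)
        if (p >= 0.5) != (y == 1):          # prediction flipped: attack succeeded
            return x
        grad = (p - y) * w                  # d(log-loss)/dx for logistic regression
        x += step * mask * grad             # ascend the loss on allowed features only
    return x
```

Starting from a correctly classified event, the loop ascends the classifier's loss until the decision boundary is crossed.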
|
B
|
Unlike the classical data-level fusion rule, the proposed combination avoids transmitting the full raw data observations. Instead, each radar pair solely transmits its estimated channel covariance matrix to the central processor.
|
In this section, Monte-Carlo simulations are performed to assess the improvement brought by the proposed fusion method on the accuracy of a multistatic radar system to localize targets in the coverage area. The method proposed in Section III is compared to the following methods:
|
In Section III-B, we made the assumption that the matrix Ŝ_p is diagonal to ensure that the proposed method is equivalent to the complete 2K-dimensional ML estimator. We now discuss the impact of this assumption. Figure 3 displays the impact of the number of subcarriers Q_p on the accuracy of the presented methods and evaluates its influence on the average diagonality of the matrix Ŝ_p, using a criterion defined in [11]. The matrix is considered perfectly diagonal when the criterion is equal to 1 and balanced when it is equal to 0. With an increase in the number of subcarriers, the accuracy of the proposed method, of method A, and of the soft fusion increases. This is due to the improved reliability of the MUSIC algorithm outputs when the sample covariance matrix R_p is well estimated. It can be observed that the proposed method remains reliable and more accurate than the other methods even when Ŝ_p is not perfectly diagonal.
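A diagonality score of the kind used above can be illustrated with a simple energy ratio. Note this is a stand-in for the criterion of [11], which is not reproduced in this excerpt; it merely shares the stated endpoints (1 for a perfectly diagonal matrix, 0 for a "balanced" matrix whose squared energy is spread evenly over all entries).

```python
import numpy as np

def diagonality(S):
    """Illustrative diagonality score: fraction of squared energy on the
    diagonal, affinely rescaled so that a perfectly diagonal matrix scores 1
    and a matrix with uniform |entries| scores 0.
    """
    S = np.asarray(S, dtype=complex)
    k = S.shape[0]
    total = np.sum(np.abs(S) ** 2)
    diag = np.sum(np.abs(np.diag(S)) ** 2)
    ratio = diag / total
    return (ratio - 1.0 / k) / (1.0 - 1.0 / k)
```

For instance, the identity matrix scores 1, while an all-ones matrix scores 0.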
|
Figure 3: Impact of the number of subcarriers when each radar pair has the same characteristics. The RMSE is compared for the different methods as a function of the number of subcarriers. The impact of the number of subcarriers on the diagonality of Ŝ_p is represented by the dashed line.
|
The performance of the proposed combination is compared to other fusion methods and its benefit is evaluated by numerical simulations. The study examines various system parameters, including the number of antennas, the noise variance, and the number of subcarriers, to assess their impact on localization accuracy. The proposed methodology could be expanded in future works to take advantage of the range and Doppler estimations of each radar pair to enhance localization accuracy. The proposed framework can also be extended to any type of multistatic radar that provides noisy channel estimates.
|
D
|
This supports the assumption in Basu et al. (2017) that drivers prefer to experience automated driving in a manner they believe aligns with their own driving style, irrespective of their actual driving style.
|
This aligns with numerous studies, including Wang et al. (2022); Peng et al. (2022); Yusof et al. (2016); Ma and Zhang (2021); Basu et al. (2017); Hartwich et al. (2015); Bellem et al. (2018); Sourelli et al. (2023); Rossner and Bullinger (2020a); Ekman et al. (2019); Dillen et al. (2020), indicating that users generally favor a more passive driving style when being driven by an AV.
|
Moreover, users’ assessment of an AV’s driving style is shaped by a combination of objective and subjective factors Peng et al. (2022).
|
Moreover, there are on-drive evaluations based on verbal feedback or questionnaires, typically initiated through prompts for assessment by a research assistant or an audio signal Ekman et al. (2019); Peng et al. (2022); Vasile et al. (2023).
|
Therefore, when evaluating driving style differences, it is crucial to utilize a motion system in combination with realistic road and traffic models like in Bellem et al. (2018); Hajiseyedjavadi et al. (2022); Peng et al. (2022); Wang et al. (2022); Schrum et al. (2023).
|
B
|
Our model is an extension of VoxelNeXT [2] that takes full advantage of the sparsity of the lidar points. The architecture of MULSPAD is shown in Figure 2 where, for a particular batch, dimensions of feature vectors at various stages of processing are marked.
|
By a process of voxelization, we quantize G^0 into G^1, and apply a set function q(·) to all the points (after a random shuffling and up to a given limit on the number) falling into the same voxel, to produce a single vector of dimension d_1 (which equals d_0 in [4] and [2], and 6d_0 in our paper for 6 sweeps), so that we have
|
Paired detections occupy a bigger span of space, so to increase the effective receptive field 𝒩^{-1}(·), we add another stage X^7 in (3). In forming the BEV, for vectors falling on the same (x, y) index, we stack up their sum and their elementwise max to form the feature vector; in VoxelNeXT [2] only the mean is used.
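The BEV pooling step described above can be sketched with dense NumPy arrays. This is an illustration of the sum-plus-max reduction over features sharing an (x, y) cell, not the authors' sparse implementation; the function name and the flattened cell-id encoding are assumptions of this sketch.

```python
import numpy as np

def bev_pool(indices, feats):
    """Reduce sparse feature vectors that fall on the same BEV cell.

    indices: (n,) array of flattened (x, y) cell ids, one per feature vector.
    feats:   (n, d) array of feature vectors.
    Returns a dict mapping each cell id to the concatenation of the group's
    elementwise sum and elementwise max (giving a 2d-dimensional vector).
    """
    out = {}
    for idx in np.unique(indices):
        group = feats[indices == idx]
        out[int(idx)] = np.concatenate([group.sum(axis=0), group.max(axis=0)])
    return out
```

A mean-only reduction, as in VoxelNeXT, would instead return `group.mean(axis=0)` per cell.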
|
To obtain symmetry, we have developed the proof-of-concept model MULSPAD, based on VoxelNeXT [2], to produce paired detections for each object, as shown in Figure 1 using the Waymo Open Dataset [3]. We use 6 sweeps indexed by [-5, -4, -3, -2, -1, 0], where 0 corresponds to “current time” and -5 corresponds to “5 sweeps ago.”
|
Since we are using multiple sweeps, we increase the budget for the number of voxels used. For each voxel, since we are using 6 sweeps and therefore have 6 distinct relative time values that can be attached to a lidar vector, we allow up to 30 points in a voxel, and take the mean for each time value. We then stack up the 6 mean vectors to form a feature vector for the voxel.
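The per-voxel featurization just described can be sketched as follows. This is an illustration under stated assumptions, not the paper's code: the point dimension d0, the handling of a sweep with no points in the voxel (a zero mean here), and the function name are all hypothetical.

```python
import numpy as np

def voxel_feature(points, times, n_sweeps=6, d0=4, max_points=30):
    """Multi-sweep voxel feature: cap the voxel at max_points lidar points,
    average the d0-dimensional point vectors separately for each relative
    time value (0, -1, ..., -(n_sweeps-1)), and stack the per-sweep means
    into one (n_sweeps * d0)-dimensional vector.
    """
    points, times = points[:max_points], times[:max_points]
    means = []
    for t in range(n_sweeps):
        sel = points[times == -t]
        means.append(sel.mean(axis=0) if len(sel) else np.zeros(d0))
    return np.concatenate(means)
```

With 6 sweeps and d0 = 4 this yields the 24-dimensional stacked voxel vector implied by the text.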
|
D
|
Overall, through these experiments, we examined the performance of different models in various blind room parameter estimation tasks and assessed their adaptability in handling variable-length audio inputs.
|
In the task of blind room parameter estimation, Dataset I and Dataset II mentioned in Sections 3.1 and 3.2 were utilized. In the preprocessing phase, room volume labels (in m³, on a logarithmic scale) were exclusively read, and four models, the CNN-based model, the CRNN-based model, the Proposed Method, and the “proposed method w/ pretrain” model, were individually evaluated for their performance on Dataset I and Dataset II. For blind room parameter estimation with variable-length audio input, Dataset II was employed. Similarly, in the preprocessing phase, only room volume labels (in m³, log-scaled) were considered. However, a modification was made to the test set of Dataset II. Specifically, samples were extracted from 1 to 4 seconds with a step size of 0.5 seconds, and zero padding was applied to audio samples of different lengths to match the original length. This was done to assess the performance of different models in handling blind room parameter estimation with audio inputs of different lengths.
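The test-set modification described above (clips from 1 to 4 seconds in 0.5-second steps, zero-padded back to the original length) can be sketched as follows; the function name and sample rate are illustrative.

```python
import numpy as np

def make_variable_length_clips(audio, sr, min_s=1.0, max_s=4.0, step_s=0.5):
    """Cut a fixed-length audio array to lengths from min_s to max_s seconds
    in step_s increments, zero-padding each clip back to the original length
    so every model still receives a fixed-size input.
    """
    clips = []
    for sec in np.arange(min_s, max_s + 1e-9, step_s):
        n = int(sec * sr)
        clip = np.zeros_like(audio)
        clip[:n] = audio[:n]          # keep the first `sec` seconds, pad the rest
        clips.append(clip)
    return clips
```

A 4-second recording thus yields 7 padded test clips (1.0 s, 1.5 s, ..., 4.0 s).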
|
Finally, in the task of joint estimation of room parameters, Dataset II was used. In the preprocessing phase, the model simultaneously reads room RT60 (in seconds) labels and room volume labels (in m3 units). In order to overcome the significant scale differences between these two parameters, we adopted an approach where we mapped the values of RT60 to volume values and applied a logarithmic scaling to them. It is worth emphasizing that this data processing method is reversible, allowing us to revert all parameters to standard units at any time. The advantage of mapping the parameter relationship rather than standard normalization is that it eliminates the need for frequent adjustment of hyperparameters when dealing with different blind room parameter estimation tasks, as it effectively addresses the differences in units and magnitudes among the parameters. This is done to evaluate the performance of different models in joint room parameters estimation.
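One way to realize a reversible, log-scaled mapping of the kind described above is an affine transform in log-volume space. The exact formula used by the authors is not given in this excerpt; the ranges and the affine form below are assumptions of this sketch, chosen only to show that the mapping is exactly invertible.

```python
import numpy as np

def make_mapping(rt60_range=(0.2, 2.0), vol_range=(10.0, 10000.0)):
    """Build a reversible mapping from RT60 (seconds) onto the log10-volume
    scale (m^3), by matching the endpoints of illustrative training ranges.
    Returns (fwd, inv) with inv(fwd(x)) == x for any x.
    """
    a = (np.log10(vol_range[1]) - np.log10(vol_range[0])) / (rt60_range[1] - rt60_range[0])
    b = np.log10(vol_range[0]) - a * rt60_range[0]

    def fwd(rt60):                 # RT60 (s) -> log10-volume scale
        return a * rt60 + b

    def inv(mapped):               # exact inverse: back to seconds
        return (mapped - b) / a

    return fwd, inv
```

Because the transform is affine and invertible, all parameters can be reverted to standard units at any time, matching the reversibility the text emphasizes.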
|
In this task, we selected three models, namely the CNN-based model, CRNN-based model, as well as the “proposed method w/ pretrain” model, and trained them on Dataset II. Their network architectures were fundamentally similar to those used for the “Estimation of room volume parameter” task, with minor modifications. In the “Joint estimation of room parameters” task, the three models are required to output two parameters, i.e. room volume and RT60, instead of a single parameter. Consequently, the final output layers of the models were modified to include two fully connected layers for estimating different room parameters. During the training process, hyperparameters were fine-tuned (as described in Section 5.3), and the loss function was adjusted (as shown in Eq.6).
|
In this section, model performances under variable-length audio inputs are evaluated for the “Room parameter estimation” task. The selected models were tested with different lengths of audio inputs, and their performances were assessed using four objective evaluation metrics as shown in Fig. 8. It is evident from the figure that the accuracy of the models in predicting room volume parameter significantly depends on the length of the input audio. As the input audio length shortens, the estimation performance of all models inevitably experiences degradation.
|
A
|
Satellite communications have garnered interest as a promising solution to achieve ubiquitous connectivity. According to the International Telecommunication Union report [1], nearly half of the global population remains unconnected. Satellites are expected to address this connectivity gap, offering a viable solution that alleviates the cost burden faced by telecom operators when deploying terrestrial base stations [2]. Traditionally, geostationary orbit (GEO) satellites were the primary choice for satellite networks due to the extensive coverage facilitated by their high altitudes. However, the extremely high altitudes of GEO satellites present challenges such as high latency and low area spectral efficiency, making them less suitable for 5G applications. The industry has shifted towards adopting low Earth orbit (LEO) satellites to reduce latency, increase service density, and facilitate cost-effective launches [3, 4, 5]. Companies like SpaceX, OneWeb, and Amazon are planning to deploy large constellations in LEOs, often referred to as mega-constellations [6]. However, in mega-constellations, launching additional satellites does not necessarily increase throughput and could even lead to performance degradation due to severe inter-satellite interference. To tackle this challenge, satellite cooperation concepts such as LEO remote sensing [7] and satellite clusters [8],[9] have been introduced to manage satellite networks effectively and mitigate inter-satellite interference.
|
In this paper, we proposed mathematical analyses of the performance of satellite cluster networks, where satellites in the cluster area collaborate to serve users, particularly in mega-constellations. To establish the lower and upper bounds of the coverage probability, we first derived the key parameters of the approximated Gamma random variables to handle the compound terms in the SIR and compared the advantages of each in terms of tightness and complexity. Leveraging the distribution of the approximated Gamma random variable, we suggested lower and upper bounds of the coverage probability that depend on the system parameters. While the results in this work relied on random variable approximations and provided lower and upper bounds, these bounds effectively showed the network’s performance with sufficient tightness to simulation results. Moreover, our analyses not only reduce the computational burden through finite operations but are also expected to offer insights into the impact of system parameters on the coverage probability in satellite cluster networks.
|
The recent works [17, 18, 19, 20, 21, 22, 23, 24, 25] have utilized the stochastic geometry to evaluate the system-level performance of satellite networks, focusing on scenarios where each user is served by a single satellite at a given time. Some of these works [17, 18, 19] modeled the distribution of satellite locations on the surface of a sphere using a BPP, i.e., the number of satellites distributed in a certain area follows the binomial distribution. In [17], the coverage probability and average achievable rate were evaluated, and guidelines for selecting the system parameters such as the altitudes and the number of frequency channels were proposed. The authors in [18] derived the outage probability considering satellite antenna patterns and practical channels modeled by the shadowed-Rician fading. In addition, the coverage performance was investigated in [19] for a scenario where satellite gateways that serve as relays between the users and the LEO satellites are deployed on the ground.
|
As mega-constellations complicate simulation-based network performance analyses, the need for new tools arises for effective assessment. Stochastic geometry is introduced to mathematically analyze performance, modeling the random behavior of nodes in wireless networks, including base stations and users, through point processes such as binomial point processes (BPPs) or Poisson point processes (PPPs) [10]. Previous studies [11, 12, 13, 14, 15, 16] have conducted stochastic geometry-based analyses for wireless networks using various performance metrics for terrestrial networks. A framework for modeling wireless networks was proposed in [11], addressing the outage probability, network throughput, and capacity. In [12], the distribution of base stations modeled by a PPP was compared to the actual distribution in cellular networks, and the coverage probability and average rate were evaluated. The authors in [13] modeled downlink heterogeneous cellular networks with the PPP and analyzed the coverage probability and average rate. In [14], the coverage probability for uplink cellular networks modeled by the PPP was studied. The authors in [15] derived the downlink coverage probability considering base station cooperation within the PPP framework. A BPP model for a cache-enabled networks was used to characterize the downlink coverage probability and network spectral efficiency in [16].
|
Other works [20, 21, 22, 23] modeled the distribution of satellite locations using a PPP, i.e., the number of nodes is randomly determined according to the Poisson distribution. While the PPP is conventionally applied over an infinite two-dimensional area, recent findings have demonstrated that PPPs can effectively characterize node distributions even in finite spaces. In particular, the authors in [20] showed that a PPP could effectively capture the actual Starlink constellation in terms of the number of visible satellites, and derived the coverage probability. In [21], the coverage probability was derived based on the contact distance distribution. The work [22] dealt with the problem of determining the optimal satellite altitude that maximizes the downlink coverage probability. In [23], a non-homogeneous PPP was used to model the varying satellite density across latitudes, and the coverage probability and average achievable data rate were derived. Furthermore, other works [24, 25] analyzed satellite networks considering orbit geometry. In [24], a PPP was employed to model the distribution of satellites in orbits. To capture the geometric characteristics of satellites with orbits that may vary in altitude, a framework using a Cox point process was suggested, and the outage probability was investigated in [25]. From these works, we conclude that point processes can successfully model actual satellite constellations, and that analytical results based on stochastic geometry can effectively estimate actual performance. However, these performance analyses have been conducted solely under the scenario where a single satellite serves a user.
|
C
|
The position of the object itself: For example, [Item 0] is located in the ‘lower-right’ of the entire picture. This word occupies 88 bits, but it conveys only a rough range; in contrast, an 8×8 map occupies only 64 bits and can specify the location more accurately.
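The bit counts quoted above follow from simple arithmetic: the 11-character string 'lower-right' costs 11 × 8 = 88 bits as 8-bit text, while a binary 8×8 location map costs one bit per cell.

```python
# Quick check of the quoted bit costs.
word_bits = len("lower-right") * 8   # 11 ASCII characters at 8 bits each
map_bits = 8 * 8                     # one bit per cell of the 8x8 map
assert word_bits == 88
assert map_bits == 64
```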
|
This image only includes the rough outline of the original image, which can greatly improve the consistency score. At the same time, although the perceptual quality of this image is poor, the decoder reconstruction process will add certain details, and the perception score of the final decoded image is still acceptable.
|
The map encoder acts as an additional module for LMM encoder, by characterizing the spatial relationship between multiple items. This encoder can support a dynamic number of items to balance performance against bitrate. A three-item situation is shown in Fig. LABEL:fig:mask as an example, each map including the following two aspects:
|
The relationship between objects: For example, [Item 2] is to the bottom of [Item 1], and it is difficult to describe the distance between them; in this case, only spatial information can complete this task.
|
The position of the object itself: For example, [Item 0] is located in the ‘lower-right’ of the entire picture. This word occupies 88 bits, but it conveys only a rough range; in contrast, an 8×8 map occupies only 64 bits and can specify the location more accurately.
|
C
|
Recent research has extensively applied deep convolutional neural networks [1, 2, 3] and attention mechanisms [4] to enhance the accuracy of PE diagnosis. Concurrently, techniques such as CNN-LSTM [5, 6, 4] have been utilized to consider the relationships between consecutive Computed Tomography (CT) slices, thereby better capturing dependencies among these slices. The most sophisticated model to date, PENet [7], is an end-to-end 3D CNN that leverages multiple CT slices for PE detection. The use of 3D convolutions allows the network to incorporate information from multiple slices during prediction, making the network’s ability to learn global information crucial. This is because the presence of PE is not confined to a single CT slice.
|
Our study is designed to forecast the presence or absence of PE in a patient through the integration of the patient’s chest CTPA image and the corresponding EMR attribute information. This objective is consequently translated into a binary classification task. In this section, we delineate three key elements of our framework: the image-only model, the EMR-only model, and the multimodal fusion module. The architecture of the model is depicted in Figure 1.
|
This study aimed to establish a multimodal deep learning model for diagnosing pulmonary embolism by harnessing information from CT images and EMR data. The experimental results demonstrate that our proposed multimodal model excelled with an AUROC of 94.1%, accuracy of 90.2%, and an F1 score of 90.6%, outperforming all other models compared. The improvement in AUROC compared to the image-based model was 24.2%, the EMR-based model was 3.9%, and the model lacking the cross-modal module was 0.5%. Specifically, we elaborated on a multimodal fusion strategy based on multi-view and cross-modal approaches. The multi-view module was designed to extract features from the spatial, channel, and dimensional aspects of CT images, while the cross-modal module effectively integrated features from both CT images and EMR data. The preliminary results indicated considerable improvements in augmenting model performance and robustness compared to single-modal methods. Implementing our approach allowed the model to thoroughly comprehend and utilize information from diverse data sources in a comprehensive manner, thereby providing robust support to enhance the accuracy and reliability of pulmonary embolism detection.
|
From Table 1, our model emerges superior across all metrics when compared to other state-of-the-art methods. Specifically, in comparison with the single-modality method, our method enhances the Area Under the Receiver Operating Characteristic (AUROC) by up to 0.281, increases the accuracy by 0.346, and boosts the F1 score by 0.240. These improvements suggest that our multimodal approach effectively integrates the interrelations between image and text data, compared to models that rely solely on a single data modality. Utilizing two modalities as inputs not only offers a comprehensive interpretation of the data but also exploits the complementarity between different modalities. When compared to PEfusion, our model exhibits an increase in AUROC, accuracy, and F1 score of 0.005, 0.020, and 0.024, respectively. This underscores our model's proficiency in feature fusion. The introduced CMAF module adeptly captures the inherent correlations between the two modalities, thereby providing the model with richer information.
|
Despite the proliferation of deep learning-based methods in the field of medical imaging, a significant issue persists, namely the neglect of how clinicians frequently employ multimodal data for collaborative decision-making in diagnosing clinical conditions. This is due to the fact that data from different modalities can enhance each other. In response to this, Tang et al. [8] proposed an unsupervised method that employs a Multiscale Adaptive Transformer to integrate medical image models from two modalities. This method has shown superior performance and generalization ability. Furthermore, the integration of Electronic Medical Record (EMR) data with Computed Tomography (CT) images may present a promising approach. Zhou et al. [9] introduced a multimodal fusion model that combines CT and EMR data for the automated classification of Pulmonary Embolism (PE) cases. Comprising a CT imaging model, an EMR model, and a multimodal fusion model, their work evidenced the superiority of the multimodal model over reliance on a single data modality.
|
D
|
MM-WHS dataset. MM-WHS [75] dataset is also unseen in the pre-training, which contains 7 classes including Left
|
Ventricle, whole aorta, Right Ventricle, Left Atrium, myocardium of Left Ventricle, Right Atrium, and Pulmonary Artery. The data splits are also shown in Table 8.
|
BTCV dataset. BTCV [35] dataset contains one background class and thirteen organ classes, i.e., spleen, right kidney, left kidney, gallbladder, esophagus, liver, stomach, aorta, inferior vena cava, portal and splenic veins, pancreas, left and right adrenal glands. Following the previous works [13, 74, 73, 52], we split BTCV [35] dataset into 24 scans for training and 6 scans for validation. It is worth noting that the BTCV [35] dataset is used in pre-training.
|
We further evaluate the settings of the balance parameter $\lambda$ for the loss functions, as shown in Table 10. We also report the Dice Score on the BTCV [35] and MM-WHS [75] datasets for evaluation. We set $\lambda$ to 0.5, 1.0, and 1.5 for the ablation studies. As shown in Table 10, we find that the setting of $\lambda$ does not matter much. Thus, in VoCo, we consider the loss functions equally important and set $\lambda$ to 1.
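The weighting scheme described here reduces to a simple weighted sum of two loss terms; a minimal sketch follows (the function and argument names are ours for illustration, not VoCo's):

```python
def combined_loss(loss_a, loss_b, lam=1.0):
    # lam balances the two pre-training loss terms; lam = 1.0 weights
    # them equally, which is the setting the ablation above found
    # sufficient since the exact value "does not matter much".
    return loss_a + lam * loss_b
```

In practice `loss_a` and `loss_b` would be tensors produced by the two training objectives, and the sum is what gets back-propagated.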
|
MM-WHS dataset. MM-WHS [75] dataset is also unseen in the pre-training, which contains 7 classes including Left
|
A
|
The analysis of RF data presents new opportunities for adding a new dimension to the input data or even completely addressing localization-type problems such as node positioning [11] or outdoor environment reconstruction. RF signals propagate through the environment and interact with various elements such as buildings and terrain. By analyzing RF data, it becomes possible to infer the characteristics of the surrounding environment and even reconstruct it in digital form. Unlike vision-based methods, RF-based approaches are less affected by environmental conditions and can provide valuable insights into the structural and material properties of objects.
|
In recent years, a diverse range of technologies has been used for achieving environment reconstruction, addressing various challenges and opportunities in urban planning, agriculture, engineering, and robotics domains. One notable study focuses on the digital transformation of urban planning processes, emphasizing the use of 3D spatial data and models to create a city digital twin, enabling more illustrative and comprehensible representations of urban environments [12]. In the agricultural robotics domain, another study proposes a virtual reality (VR) and Kinect-based immersive teleoperation system for navigating unstructured agricultural environments, utilizing real-time 3D reconstruction algorithms to create realistic virtual environments [13]. Additionally, digital twin technology has gained traction in engineering communities, with a study presenting an AI-powered framework for efficient communication and reconstruction of large-scale digital twins using 3D point cloud data [14]. Furthermore, a decentralized framework is proposed for collaborative 3D mapping of outdoor areas using mobile ground robots, demonstrating the reliability and efficiency of real-time 3D LiDAR measurements and peer-to-peer communication strategies [15]. A more resource-heavyweight approach presented in another study leverages signal-to-noise ratio (SNR) measurements from low earth orbit (LEO) communication satellites for real-time 3D city map reconstruction, offering a novel solution to overcome the limitations of traditional passive sensors and enable global-scale mapping [16]. However, despite these advancements, challenges persist in achieving cost-effective, lightweight and scalable 3D map reconstruction for urban environments.
|
However, one of the major challenges in leveraging RF data for outdoor environment reconstruction is the scarcity of large-scale, real-world datasets. Collecting comprehensive RF datasets in diverse outdoor environments is a costly and time-consuming endeavor. To address this issue, researchers have turned to synthetic datasets generated using realistic propagation models. Synthetic datasets offer the advantage of providing labeled data at scale, enabling the training of data-hungry deep learning models without the need for extensive data collection efforts.
|
Despite the potential of synthetic RF datasets, there remains a notable gap in research focused on utilizing deep learning techniques for outdoor environment reconstruction using such datasets. While deep learning has demonstrated remarkable success in various domains, its application to synthetic RF datasets for outdoor environment reconstruction remains relatively unexplored. This paper aims to fill this gap by investigating the effectiveness of deep learning approaches on a selected synthetic RF dataset for reconstructing outdoor environments.
|
The analysis of RF data presents new opportunities for adding a new dimension to the input data or even completely addressing localization-type problems such as node positioning [11] or outdoor environment reconstruction. RF signals propagate through the environment and interact with various elements such as buildings and terrain. By analyzing RF data, it becomes possible to infer the characteristics of the surrounding environment and even reconstruct it in digital form. Unlike vision-based methods, RF-based approaches are less affected by environmental conditions and can provide valuable insights into the structural and material properties of objects.
|
B
|
From a general point of view, flow control problems are characterized by a simulated physics environment spanned over at least two dimensions, possibly including time. The control is performed by an agent that modifies boundary conditions, source terms or other components of the domain in order to optimize a given objective. A notable difficulty is hereby designing a robust, and at the same time efficient environment that is able to cope with a wide range of actions while preserving low and stable runtimes [8]. One way of minimizing the computational cost is lumping the Navier-Stokes equations, the backbone of most fluid mechanics problems, by limiting the dimensionality and restricting its terms to the dominant ones for each problem at hand. This way, it is possible to retain the main features of the flow, while tuning the schemes and discretizations towards higher performances.
|
This library provides self-contained cases for deep reinforcement learning-based flow control. The goal is to provide the community with benchmarks that fall within the range of flow control problems, while following three constraints: (i) be written in Python to ensure a simple coupling with most DRL implementations, (ii) follow the general gym application programming interface [8], and (iii) be cheap enough in terms of CPU usage so that trainings can be performed on a decent computing station. Aligned with the standardized approach of gym, which streamlines environment setup and facilitates a focused exploration of RL research, this library serves as a first step for prototyping flow control algorithms before moving on to larger problems that will require more efficient CFD solvers and, most probably, a CPU cluster.
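As a sketch of what constraint (ii) implies, each case would expose the familiar gym-style `reset`/`step` interface; the class below is a purely illustrative placeholder (not code from the library), with the solver state stubbed out:

```python
import numpy as np

class MinimalFlowEnv:
    """Illustrative gym-style flow-control case: reset() returns an
    observation; step(action) would advance the CFD solver by one control
    interval and return (obs, reward, done, info)."""

    def __init__(self, n_obs=4, episode_len=10):
        self.n_obs = n_obs            # e.g. number of probe measurements
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return np.zeros(self.n_obs)   # placeholder for initial probe values

    def step(self, action):
        self.t += 1
        obs = np.zeros(self.n_obs)    # placeholder: solver state at probes
        reward = -abs(float(action))  # e.g. penalize actuation effort
        done = self.t >= self.episode_len
        return obs, reward, done, {}
```

A DRL library coupled this way only needs the four-tuple contract, which is what makes prototyping against such cases cheap before moving to heavier solvers.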
|
In the present contribution, we lay the first stone of a numerical flow control benchmark library for DRL algorithms to systematically assess methodological improvements on physically and numerically relevant problems. The design of the test cases voluntarily limits the computational cost of the solvers, making this library a first benchmarking step before testing on more complex and CPU-intensive cases.
|
While the present version of the library presents a modest variety of phenomena and control types, its purpose is to grow with new cases proposed by the community, within the constraints detailed in section 2. To this end, issues and pull requests are accepted on the library repository. With the present work, we hope to provide a solid foundation for the development of a community-driven library of fluid dynamics environments, and to foster the development of new control strategies for fluid dynamics.
|
In recent years, the area of deep reinforcement learning-based flow control has undergone a rapid development, with a surge of contributions on topics such as (but not limited to) drag reduction [1], collective swimming [2] or heat transfer [3]. Unlike traditional methods, deep reinforcement learning (DRL) enables the learning of complex control strategies directly from data, thereby alleviating the effects of local minima and improving the generalizability of algorithms towards other scenarios [4]. Yet, the inherent reproducibility issues of DRL algorithms [5], as well as the variety of computational fluid dynamics (CFD) solvers and the possible variability of environment design among the different actors of the community, make it hard to accurately compare algorithm performances, thus hindering the general progress of the field. Moreover, the standard DRL benchmarks (such as the mujoco package [6] or the Atari games from the arcade learning environments (ale) [7]) are of limited interest in the context of benchmarking DRL methods for flow control, as their dynamics, observation spaces, computational requirements and action constraints display substantial differences with those of numerical flow control environments.
|
B
|
$$\dot{X}=f(X,U)=\begin{bmatrix}u\cos\varphi-v\sin\varphi\\ v\cos\varphi+u\sin\varphi\\ \omega\\ a+v\omega-\frac{1}{m}F_{Y_{1}}\sin\delta\\ -u\omega+\frac{1}{m}\left(F_{Y_{1}}\cos\delta+F_{Y_{2}}\right)\\ \frac{1}{I_{z}}\left(l_{f}F_{Y_{1}}\cos\delta-l_{r}F_{Y_{2}}\right)\end{bmatrix}$$
|
The aforementioned dynamic bicycle model holds true only when the steering angle of the front wheels is small; hence, it is necessary to impose constraints on the values of the front wheel steering angle.
|
At each discrete time step, the control of the vehicle can be construed as a composite function of the acceleration $a$ and the front-wheel steering angle $\delta$.
|
When the front wheel steering angle $\delta$ of the vehicle is very small, the following approximation can be obtained:
|
Similar to the cost function in lane-change decision, the ego vehicle needs to maintain a high reference speed $v_{ref}$ to ensure it can overtake other vehicles while driving. Another important factor is the lateral deviation of the ego vehicle from the centerline of the lane, $y(l\mid t)-y_{ref}$. The designed controller should minimize this deviation to keep the vehicle driving straight. Additionally, for a smooth driving experience, it is necessary to control the acceleration $a$, the steering angle $\delta$, the rate of change in acceleration (jerk) $\Delta a$, and the rate of change in steering angle $\Delta\delta$.
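For concreteness, the dynamic bicycle model above can be integrated numerically. The following is a minimal explicit-Euler sketch, not the authors' implementation; for simplicity the lateral tire forces $F_{Y_1}$, $F_{Y_2}$ are passed in as fixed inputs, whereas in practice they come from a tire model (e.g. linear in the slip angles):

```python
import math

def bicycle_step(state, control, params, dt=0.01):
    # state = (x, y, phi, u, v, omega); control = (a, delta)
    # params = (m, Iz, lf, lr, FY1, FY2), with FY1/FY2 as fixed stand-ins
    x, y, phi, u, v, w = state
    a, delta = control
    m, Iz, lf, lr, FY1, FY2 = params
    # right-hand side of dX/dt = f(X, U) from the state equation above
    dx = u * math.cos(phi) - v * math.sin(phi)
    dy = v * math.cos(phi) + u * math.sin(phi)
    dphi = w
    du = a + v * w - FY1 * math.sin(delta) / m
    dv = -u * w + (FY1 * math.cos(delta) + FY2) / m
    dw = (lf * FY1 * math.cos(delta) - lr * FY2) / Iz
    return (x + dt * dx, y + dt * dy, phi + dt * dphi,
            u + dt * du, v + dt * dv, w + dt * dw)
```

With zero control and zero lateral forces the model reduces to straight-line motion at constant speed, which is a quick sanity check on the implementation.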
|
C
|
Unrolling networks treat the sparse-view CT reconstruction problem as an optimization task, resulting in a first-order iterative algorithm like gradient descent, which is subsequently unrolled into a deep recurrent neural network in order to learn the optimization parameters and the regularization term. Like post-processing techniques, unrolling networks have been extended to the sinogram domain [52, 56] to perform interpolation task.
|
More recently, Transformers [3, 31] have been introduced into unrolling networks, such as RegFormer [54] and HUMUS-Net [12]. While achieving commendable performance, these methods require more computational resources than traditional CNN-based unrolling networks and incur a significant memory footprint due to linear scaling with the number of unrolling iterations.
|
Unrolling networks, as referenced in [44, 12, 36], exhibit remarkable performance across diverse domains. However, they suffer from slow convergence and high computational costs, as illustrated in Fig. 1, necessitating the development of more efficient alternatives [14].
|
Recent research on unrolling networks has often focused on selecting the representation of the regularization term gradient (i.e., $\mathcal{G}$ in Eq. 4), ranging from conv-nets [7, 56, 44] to more recent attention-based nets [54, 12]. In alignment with this trend, we introduce a non-local regularization block named Incept-Mixer, depicted in Fig. 3. This block is crafted by drawing inspiration from both the multi-layer perceptron mixer [46] and the inception architecture [45], leveraging the strengths of each: capturing long-range interactions through the attention-like mechanism of MLP-Mixer and extracting local invariant features from the inception block. This design choice is evident in the ablation study (see Tab. 6), where Incept-Mixer outperforms both alternatives.
|
Second, to cut down on the computational costs associated with unrolling networks, we propose to decrease the required iterations for convergence by employing second-order optimization methods such as [21, 30]. We introduce a novel unrolling framework named QN-Mixer.
|
B
|
Table 5: Comparison with existing DP methods. CADS offers the best AIF and disparity estimation quality on our simulated DP captures based on FlyingThings3D scenes, and on our simulated DP captures based on the NYUv2 scenes. Red highlights best, orange highlights second best. † indicates metrics computed over 16 samples since these methods had a slow runtime. For methods where AIF/disparity is not predicted, metrics are marked as N.A.
|
Fine-tuning. To reconstruct real-world captures, we fine-tune a trained CADNet-RGB model (trained on simulated FlyingThings3D scenes using simulated DP PSFs). We first capture real-world PSFs as mentioned above. The real-world PSFs are used to simulate DP captures based on FlyingThings3D scenes, and the CADNet-RGB model weights are fine-tuned on the new captures for 30 epochs. During the fine-tuning phase, we train with a variable amount of heteroscedastic noise [7], with levels ranging from 0.7% to 1.5%, along with extra random data augmentations on brightness, contrast, gamma, and hue. This is done to remove certain sim-to-real mismatches and enable better depth and AIF reconstructions.
|
We compare the performance of CADS with existing learning-based dual-pixel sensing works, namely DPDNet [2], DDDNet [21], Xin et al. [33], Punnapurath et al. [23] and Kim et al. [16]. Since existing works were developed for naive (no-code) dual-pixel sensors, we use the naive DP PSF blur to simulate captures for evaluation. For evaluating CADS, we render using the coded-aperture DP PSF. We use a validation subset of 2k images from the FlyingThings3D dataset for evaluation. The results are shown in Table 3. Most existing works estimate disparity, which is related to the defocus map by an unknown scale. For evaluation, we use the affine-invariant version of MAE (AI(1)) for disparity estimation quality [9] and PSNR for AIF quality (see the Supplementary for metric definitions). Note that we use the normalized defocus map output by CADNet for this comparison instead of converting it to a depth map. Works that were designed for unidirectional disparity [21, 33] show poor results when tested on simulated scenes with bidirectional disparity between the DP images. Our proposed CADS outperforms existing methods on the simulated FlyingThings3D dataset. Moreover, CADNet trained on naive DP blurs, referred to as Naive DP in Table 3, also outperforms existing methods. We also perform a comparison on rendered images from the NYUv2 dataset [28] and report the performance in the supplementary.
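An affine-invariant error of this kind fits a per-image scale and shift mapping the prediction onto the ground truth before computing the error, so that the unknown disparity scale does not penalize a method. The sketch below uses a least-squares affine fit as one simple choice; the exact AI(1) definition is in the paper's supplementary, and the function name is ours:

```python
import numpy as np

def affine_invariant_mae(pred, gt):
    # Fit scale s and shift t minimizing ||s * pred + t - gt||_2,
    # then report the MAE of the affine-aligned prediction.
    p = pred.ravel().astype(np.float64)
    g = gt.ravel().astype(np.float64)
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return float(np.mean(np.abs(s * p + t - g)))
```

By construction the metric is zero whenever the prediction is any affine transform of the ground truth, which is exactly the invariance needed to compare disparity maps defined up to an unknown scale.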
|
Table 5: Comparison with existing DP methods. CADS offers the best AIF and disparity estimation quality on our simulated DP captures based on FlyingThings3D scenes, and on our simulated DP captures based on the NYUv2 scenes. Red highlights best, orange highlights second best. † indicates metrics computed over 16 samples since these methods had a slow runtime. For methods where AIF/disparity is not predicted, metrics are marked as N.A.
|
We test existing DP-sensing works on simulated naive DP captures based on FlyingThings3D scenes and also on simulated naive DP captures based on NYUv2 scenes. Since existing methods were designed for reconstructing from naive DP captures, we created a simulated dataset of naive (no code) DP captures, using the FlyingThings3D dataset scenes and another one using NYUv2 dataset scenes. We compare the following works
|
D
|
Going beyond TPC, which allows co-transfer of power and data but is limited by electrically isolated stages, the proposed communication principle is not limited by such stages that physically block the zero-sequence path. Its scalability and flexibility for different voltage levels and system topologies have been investigated in detail. Moreover, it also promises high computational energy efficiency during data processing and learning, which has been benchmarked against binary-activated recurrent neural networks (RNNs) and ANNs.
|
Given that the physical layer of the MG and cascaded control structure from the primary control loop to the PWM stage remain the same in both Fig. 1(a) and (c), the key distinction of incorporating NSC instead of relying on the traditional CLC lies in how the dynamic measurements from the remote nodes are efficiently predicted to disregard any exogenous arrival paths or unreliable cyber scenarios, which is the main contribution of this paper. In addition to the elimination of a dedicated communication channel that primarily relies on the request-receive protocol, we leverage the proposed NSC framework to infer real-time information using power flows by deploying SNN at each bus. We firstly cover the background of the biological neuron modeling for SNNs and then discuss the offline initial weight determination of SNN at each bus in the upcoming subsections.
|
Before discussing the weight initialization strategy for SNNs to be deployed at each bus, we firstly uncover the underlying theory behind the entire network to be modeled. The neuron model in Fig. 2(e) is the spike response model (SRM). It is a widely recognized model that effectively represents the characteristics of biological neurons while retaining simplicity, which makes it well-suited for the intended application of the NSC-based coordinated control framework in MGs [24].
|
In this paper, DC MGs have been focused to showcase the principle behind the proposed NSC, whereas it is also potentially applicable to AC systems where the data collection methodology may need to be revised.
|
Extending the single neuron structure to a bidirectional DC/DC converter in Fig. 2(d), the current excitation can either emanate from the input, such as intermittent generation from renewable energy sources, or the output, such as load change or tie-line outage. Since the current flow is in both directions as compared to the case in Fig. 2(b), we decipher both input as well as output dynamics to achieve accuracy in the estimation of information at remote buses. Using multiple data set-points corresponding to different operational scenarios, semantic data collection is performed to assign the initial weights of SNN, which will be discussed later.
|
C
|
The arrival time difference of the signal can be accurately recorded through synchronization process, based on which the terminal positioning can be achieved.
|
In the actual positioning system realization, a challenge is the synchronization bias between different transmitters. After receiving the satellite PPS (pulse per second) signal at each transmitter, a square wave rising edge with a frequency of 10 MHz is adopted to control the transmission of the synchronous sequence. The error of the 10 MHz square wave superimposed on the error of the 1 PPS signal can be considered to follow the uniform distribution $U[0\,\mathrm{ns},100\,\mathrm{ns}]$ according to Section III-B.
|
Consider 2-dimensional positioning; 3-dimensional positioning can be derived in a similar fashion. Assume that transmitters A, B and C are three separate anchor points, which send signals to receiver R, as shown in Fig. 1. The three transmitters are aligned via a unified clock in the form of pulse per second (PPS) signals from a satellite timing system. The transmitters send synchronization signals to the receiver under the unified clock in a time-division manner at a fixed time interval $T$. The time-division transmission is controlled by a 10 MHz square wave generated by an atomic clock driven by the PPS signal. Based on the synchronization of pilots from the three transmitters, the receiver can obtain the time differences of arrival, from which the receiver location can be estimated.
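The estimation step can be illustrated with a tiny Gauss-Newton solver for the 2-D time-difference-of-arrival (TDOA) problem. This is a hedged sketch, not the system's actual algorithm; the function names and the convention of referencing all time differences to transmitter 0 are our assumptions:

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def locate(anchors, tdoas, p0, iters=50):
    """anchors: (3, 2) known transmitter positions; tdoas: arrival-time
    differences of transmitters 1 and 2 relative to transmitter 0;
    p0: initial guess for the receiver position."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - p, axis=1)
        # Jacobian of the range differences (d_i - d_0) w.r.t. p
        grads = (p - anchors) / d[:, None]
        J = grads[1:] - grads[0]
        # residuals: predicted minus measured range differences
        r = (d[1:] - d[0]) - C * tdoas
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p - step
    return p
```

Each measured time difference constrains the receiver to a hyperbola with two transmitters as foci; the solver iterates to the intersection of the hyperbolas.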
|
The positioning results are shown in Fig. 18, where the three transmitters are marked as three big blue circles at three known locations. Each solid colored dot denotes the real receiver position, and the asterisk mark of the corresponding color denotes the positioning estimate of the receiver. The positioning performances at nine positions are tested, where ten estimates are obtained for each positioning. A clustering effect of the estimated positions can be observed, with small internal variance. The average distance from the ten estimated values to the center of the ten positions can reach the level of around 2.1159 m. This is due to the fact that the positioning error mainly comes from the misalignment in the rising edge of the transmitter clock in the short time duration.
|
The transmitters are calibrated by the satellite timing module every second, which is achieved by an atomic clock controlling the time-division transmission. The atomic clock timing MSE is denoted as $\sigma_{clock}^{2}$. To explore its statistical properties, we test the rising edge error of the atomic clock. The experimental platform is shown in Fig. 2. A timing system with an antenna receives the satellite signal and drives the atomic clock to transmit the 10 MHz square waves, which are then sampled by an oscilloscope. The PPS output of an arbitrary waveform generator (AWG) is adopted as a baseline, whose interval to the next rising edge of the 10 MHz square waves is characterized as the atomic clock delay. This delay characterizes the systematic error of the rising edge.
|
B
|
The number of Laplacian pyramid levels $K$ is set to 5, spanning from the actual spatial resolution down to 224.
|
The temporal rectifier leverages features from the video chunks centered around the key frames at the actual frame rate to compute another scaling parameter $\alpha_{t}$ and shift parameter $\beta_{t}$ for quality rectification.
|
As part of the modular design, $\bm{f}_{s}(\cdot)$ responds to spatial distortions that arise from or are affected by spatial resizing. Similar to the base quality predictor, we work with the sparse set of $M$ key frames, $\bm{y}=\{\bm{y}_{i}\}_{i=0}^{M-1}$. The difference lies in that we do not further perform spatial resizing, but build a Laplacian pyramid of $K+1$ levels for each key frame at the actual spatial resolution. For the $i$-th key frame and at the $k$-th level, we have
|
To reliably assess the perceptual quality of digital videos with great content and distortion diversities, and variable spatial resolutions and frame rates, we propose a modular BVQA model. Our model consists of three modules: a base quality predictor, a spatial rectifier, and a temporal rectifier, responding to the visual content and distortion, spatial resolution, and frame rate changes, respectively. The base quality predictor takes a sparse set of spatially downsampled key frames as input, and produces a scalar as the quality estimate. The spatial rectifier relies on a shallow CNN to process the Laplacian pyramids of the key frames at the actual spatial resolution, and computes the scaling and shift parameters to rectify the base quality score. Similarly, the temporal rectifier relies on a lightweight CNN to process the spatially downsampled video chunks centered around the key frames at the actual frame rate, and computes another scaling and shift parameters for quality rectification. To enhance the modularity of our model, we introduce a dropout strategy during training. At each iteration, we randomly drop out the spatial and/or temporal rectifiers with pre-specified probabilities.
|
Similarly, for the temporal rectifier, we extract a number of video chunks equal to the number of key frames.
|
D
|
Research on RIS has traditionally focused on the verification of pre-designed functions [20]. Next-generation (6G) advancements require the knowledge of user locations for high-directive links. Real-time computing and control of a self-adaptive RIS phase profile based on electromagnetic feedback from complex environments remains an open research topic.
|
The convergence of telecommunications and computer vision, once distinct fields, brings up new opportunities for innovation, namely using computer vision to predict wireless channel dynamics and integrating radio-based sensing to boost computer vision applications. Research in this domain combines wireless communications, computer vision, sensing, and machine learning, enabling a wide range of innovative applications. Addressing this interdisciplinary challenge requires advanced Research Infrastructures (RI) and suitable tools.
|
Telecommunications and computer vision are converging, with significant potential for innovation by leveraging visual data to predict wireless channel dynamics and enhancing computer vision applications through radio-based imaging. This interdisciplinary approach, integrating wireless communications, computer vision, sensing, and ML, opens up various innovative applications.
|
Machine learning (ML) effectively advances learning from data to implement future tasks, leveraging knowledge from existing datasets. In wireless communications, ML improves signal recognition, spectrum sensing, and waveform design tasks [25]. RIS are noted for their potential in boosting network capacity and coverage, especially when using high-frequency waves where obstruction from objects will have a great impact. ML’s role extends to enhancing resource and energy management, security, beamforming, and channel estimation [26]. In multimedia content analysis, ML outperforms traditional computer vision techniques by offering superior object detection and identification, face recognition, and image segmentation [27]. The shift towards analyzing multimedia, including video, audio, and text, through a multimodal approach has shown better performance than individual data analysis methodologies [28].
|
Computer Vision (CV) has progressed in complex tasks like object detection and tracking [21], suggesting its utility in enhancing communications through environmental sensing. RIS-based sensing, especially at frequencies below 6 GHz can, in turn, aid CV applications.
|
D
|
Learning2Listen (L2L) [25] proposes to use a sequence-encoding VQ-VAE [37] to learn the discrete codebook of listener motion.
|
L2L [25] quantizes listener motions into a discrete one-dimensional codebook by VQ-VAE [37], and ELP [35] expands the codebook to a composition of several discrete motion-codewords
|
ELP [35] further utilizes a composition of multiple discrete motion-codewords to represent facial motion in a more fine-grained manner.
|
We retrain PCH [15], RLHG [46] and L2L [25] on ViCo [46]. It should be noted that the source code of ELP [35] and MFR-Net [21] is unavailable. For MFR-Net [21], we utilize the data from the original paper. For ELP [35], the 3DMM coefficient extraction model used is different from ours, resulting in distinct dimensions of coefficients (e.g., $\beta\in\mathbb{R}^{100T}$ for ELP [35], $\beta\in\mathbb{R}^{64T}$ for ours); thus it is not reasonable to directly compare with the evaluation data in ELP [35]. Therefore, we only provide visual comparisons in Appendix C.
|
Although ELP [35] represents the latent space under different emotions in a fine-grained manner through the codebook, what the user can control is still the input emotion rather than the fine-grained motions. In this way, the listener’s response to each emotion may depend on the data distribution of the training set.
|
B
|
What has typically been done in previous work within the maritime domain is to consider constant behaviors for vessels involved in a scenario (Minne, 2017; Pedersen et al., 2022; Torben et al., 2022; Bolbot et al., 2022). Torben et al. (2022) used a Gaussian Process to estimate how a CAS scores with respect to safety and COLREG compliance, which guides the selection of scenarios to test the system at hand, based on its confidence level of having covered the parameter space describing the set of scenarios. Zhu et al. (2022) proposed an Automatic Identification System (AIS) based scenario generation method. Here, AIS data was analyzed and used to estimate Probability Density Functions (PDFs) describing the parameters of an encounter, such as distances between vessels, their speeds, and bearings. The PDFs were then used to generate a large number of scenarios for testing CAS algorithms. The goal was to increase the test coverage for such systems, over that which is possible with only expert-designed and real AIS data scenarios. Again, generated vessels all follow constant velocity, which does not always reflect true vessel behavior in hazardous encounters. Furthermore, one cannot expect all vessels in a given situation to broadcast information using AIS, sometimes making AIS-generated scenarios partially incomplete.
|
As a side contribution, we argue through proof-of-concept cases that RRT-based planners are beneficial for vessel test scenario generation due to their rapid generation of initially feasible, although not necessarily optimal, trajectories. As we do not necessarily require that obstacle vessels follow optimal trajectories, they provide a viable approach for the fast generation of random vessel scenarios used in CAS benchmarking. RRTs can also be used to generate more realistic ship intention scenarios that can be exploited in intention-aware CAS such as the Probabilistic Scenario-based Model Predictive Control (PSB-MPC) (Tengesdal et al., 2024, 2022). Lastly, the RRTs can be used in frameworks as in (Bolbot et al., 2022) for finding relevant scenarios, where the RRT sampling heuristics can be tailored to the considered navigational factors.
|
Porres et al. (2020) used the Deep Q Network (DQN) for building a scenario test suite, based on using a neural network to score the performance of randomly generated scenarios. The performance is calculated based on geometric two-ship COLREG compliance and the risk of collision, where the score is used to determine if a given scenario is eligible for simulation and test suite inclusion. The approach should, however, be refined to account for more navigational factors such as grounding hazards in the performance evaluation. Recently, Bolbot et al. (2022) introduced a method for finding a reduced set of relevant traffic scenarios with land and disturbance consideration, through Sobol sequence sampling, filtering of scenarios based on risk metrics, and subsequent similarity clustering. Again, constant behavior is assumed for the vessels, but the process of identifying hazardous scenarios shows promise.
|
Trajectory planning is an important aspect of ship autonomy, not only in Collision Avoidance Systems (CAS) for safe and efficient voyage, but also for the safety assurance of the former. To verify CAS safety and compliance with the International Regulations for Preventing Collision at Sea (COLREG) (IMO, 2003), it will be necessary to conduct simulation-based testing in a diverse set of scenarios (Pedersen et al., 2020). The scenarios must cover varying difficulties with respect to grounding hazards or static obstacles, ships with uncertain kinematics and intentions, and environmental disturbances. Here, it will be important to develop methods for generating interesting and hazardous obstacle vessel behavior scenarios with sufficient variety for the CAS testing. Up until now, this has proved to be a challenging problem not yet solved.
|
What has typically been done in previous work within the maritime domain is to consider constant behaviors for vessels involved in a scenario (Minne, 2017; Pedersen et al., 2022; Torben et al., 2022; Bolbot et al., 2022). Torben et al. (2022) used a Gaussian Process to estimate how a CAS scores with respect to safety and COLREG compliance, which guides the selection of scenarios to test the system at hand, based on its confidence level of having covered the parameter space describing the set of scenarios. Zhu et al. (2022) proposed an Automatic Identification System (AIS) based scenario generation method. Here, AIS data was analyzed and used to estimate Probability Density Functions (PDFs) describing the parameters of an encounter, such as distances between vessels, their speeds, and bearings. The PDFs were then used to generate a large number of scenarios for testing CAS algorithms. The goal was to increase the test coverage for such systems, over that which is possible with only expert-designed and real AIS data scenarios. Again, generated vessels all follow constant velocity, which does not always reflect true vessel behavior in hazardous encounters. Furthermore, one cannot expect all vessels in a given situation to broadcast information using AIS, sometimes making AIS-generated scenarios partially incomplete.
|
B
|
Introducing a well-defined taxonomy categorizing ASR methodologies based on the domains of AM and LM.
|
This article offers an extensive examination of contemporary frameworks within advanced deep learning approaches, spanning the period from 2016 to 2023. These approaches include DTL, DRL, FL, and Transformers, all within the context of ASR. To the best of the authors’ knowledge, there has been no prior research paper that has intricately explored and critically evaluated contributions in the aforementioned advanced DL-based ASR until now.
|
ASR systems often face performance degradation in certain situations due to the "one-model-fits-all" approach. Additionally, the lack of diverse and sufficient training data affects AM performance. To overcome these constraints and improve the resilience and flexibility of ASR systems, advanced DL methodologies such as DTL and its sub-field domain adaptation (DA), DRL, and FL have surfaced. These innovative methodologies collectively address issues concerning knowledge transfer, model generalization, and training effectiveness, offering remedies that expand upon the capabilities of traditional DL models within the ASR sphere. Thus, many research studies have focused on enhancing existing ASR systems by applying the aforementioned algorithms. Figure 4 provides an overview of the current SOTA advanced DL-based ASR and its most useful related schemes in both AM and LM.
|
This paper is structured into six sections. The current section provides an introduction to the paper. Section 2 provides background on AM and LM and reviews evaluation metrics and datasets utilized in ASR. Moving forward, Section 3 delves into a comprehensive review of recent advancements in ASR utilizing advanced DL approaches, including Transformers, DTL, FL and DRL. Sections 4 and 5 respectively address the existing challenges and future directions concerning advanced DL-based ASR. Finally, Section 6 presents concluding remarks summarizing the key findings of the paper. Figure 2 presents a structured roadmap, offering a comprehensive guide to assist readers in navigating through the various sections and subsections of the paper.
|
Proposing future directions to enhance the performance of advanced DL-based ASR solutions and predicting the potential advancements in the field.
|
D
|
Following previous real-world SR works [54, 8, 56, 46, 25], we conduct inference on low-quality LR datasets to generate high-quality HR images and evaluate them using no-reference metrics. The scaling factor is 4 for all methods.
|
As shown in Tab. 2, we substitute our API training dataset with several alternatives for comparative analysis: AVC-Train [56], frames randomly selected from the same video source as our API, a collection of I-Frames with IQA selection, and a collection of I-Frames with ICA selection.
|
To validate the effectiveness of our approach, our evaluation is based on AVC-RealLQ [56], which has 46 video clips each with 100 frames.
|
As shown in Tab. 1, our model has the smallest network size, 1.03M parameters, but has SOTA performance in all metrics among all image and video-based methods.
|
Table 1: Quantitative comparisons on AVC-RealLQ [56]. Bold text indicates the best performance. ('*' denotes fine-tuning on animation videos from [56])
|
B
|
Since a linear subspace always contains the 0 vector, $\mathscr{H}_{\mathbf{s}}$ is a singleton if and only if $\mathscr{G}^{\perp}=\{0\}$. This is equivalent to $\mathscr{G}=\mathscr{H}$, which means that the linear span of $(g_k)_{k\in\mathsf{Z}}$ is dense in the whole space $\mathscr{H}$. Thus, the unique reconstruction of $x$ depends solely on the sampling kernels $(g_k)_{k\in\mathsf{Z}}$, and not on the input $x$ itself.
|
The paper is organized as follows. We start in Section II by reviewing the basic knowledge that samples of the form (1) bring about an input signal $x$ in a general Hilbert space $\mathscr{H}$, without any assumption on the sampling kernels $(g_k)_{k\in\mathsf{Z}}$. We give the basic principle of the POCS algorithm for finding estimates that are consistent with the samples of (1). In Section III, we show how condition (3) leads to a specific configuration of the POCS algorithm that is more efficient and that will later be shown to have special connections with the pseudo-inversion of $S$. For that purpose, we devote Section IV to reviewing the notion of pseudo-inverse for linear operators in infinite dimension, which is not commonly used knowledge in signal processing. Section V then contains the major mathematical contribution of this article. Starting from a zero initial estimate, we prove that the POCS iteration tends to $S^{\dagger}\mathbf{s}$ by contraction whenever the sampling configuration theoretically allows a stable consistent reconstruction (which is systematically the case when $\mathsf{Z}$ is finite). When the initial estimate is a signal $u^{(0)}\neq 0$, we show that the POCS limit is more generally the signal of the type $S^{\dagger}\mathbf{s}+v$ that is closest to $u^{(0)}$ under the constraints $v\in\mathscr{A}$ and $Sv=0$.
This is of particular interest when consistent reconstruction is not unique due to insufficient sampling, and one wishes to pick a consistent estimate that is close to a signal guess of statistical or heuristic nature [9]. This reconstruction simultaneously takes care of sampling errors with an action of "noise shaping" in the case of oversampling [27]. In Section VI, we discuss some important aspects of practical implementations. We finally present in Sections VII and VIII the two mentioned examples of application.
|
Although seemingly ideal, this condition turns out to be realized in the time-encoding system of Lazar and Tóth [16], in the case where $\mathscr{H}=L^2(\mathbb{R})$ and $\mathscr{A}$ is a subspace of bandlimited signals. This was noticed and utilized in [20] to construct an algorithm achieving $S^{\dagger}$ in the specific application of [16]. The algorithm was based on a particular application of the method of projection onto convex sets (POCS) [22, 23]. This was later generalized to integrate-and-fire encoding with leakage in [21]. The purpose of the present article is to extract from [20, 21] the most general framework of pseudo-inversion of $S$ by successive filtering under the abstract assumption of (3). In this generalization, the content of these references is revisited and reformulated to reach its most fundamental ingredients and obtain a self-sufficient theory that is independent of the applications. Our formalism contains theoretical results as well as efficient techniques of practical implementation. A goal is to propose a new framework of nonuniform sampling schemes for which a pseudo-inverse input reconstruction method by successive filtering is readily available. While this objective is meant to influence the design of future sampling schemes, we also show in this paper an immediate impact of the proposed theory by applying it to two existing sampling/reconstruction schemes: one in multi-channel time encoding [24, 11] and one in the original case of nonuniform point sampling [4]. In these two sampling applications, the authors studied their proposed reconstruction algorithms under specific assumptions of unique reconstruction. In both cases, we show that their own algorithms coincide with our generic POCS algorithm.
In this process, we end up pointing out previously unknown properties that their algorithms possess, including the ability to achieve perfect reconstruction even in situations where proofs of unique reconstruction are not available, the characteristics of their limit when the sampling is insufficient, and their behavior towards sampling noise. On the theoretical side, an important role of these two applications is to show that the abstract condition (3) can be found in unexpected situations, using some non-standard techniques of signal analysis. In the first example, condition (3) is extracted after some non-trivial reduction of the complex algorithm of [24, 11]. Beyond pointing out the unknown properties of their method, our high-level formalization allows a concise reformulation of it, together with an organized presentation of its implementation at the level of discrete-time filters. For a complementary demonstration, the difficulty of the second example is not in the complexity of the sampling system, but in the non-trivial signal-theoretic approach that is required. For condition (3) to be realized in this case, the traditional Hilbert space $L^2(\mathbb{R})$ needs to be replaced by the homogeneous Sobolev space $\dot{H}^1(\mathbb{R})$ [25, 26]. This only allows us to prove convergence up to a constant component, but does lead for the first time to a result of pseudo-inversion of point sampling by successive filtering.
|
where $u^{(0)}$ is some initial estimate proposed by the user. While $u^{(0)}$ can be obtained by heuristic or statistical means, $\hat{x}$ is an estimate of $x$ that is guaranteed to be better than $u^{(0)}$ and cannot be further improved deterministically. It is also the consistent estimate that is closest to $u^{(0)}$ with respect to $\|\cdot\|$. The strength of this procedure is that whenever uniqueness of reconstruction is effective, whether one is able to prove it or not, $\hat{x}$ is guaranteed to be the perfect reconstruction of $x$. In the case of non-unique reconstruction, the type (10) of reconstruction was first considered by Yen in [2] for the estimation of a bandlimited signal of $L^2(\mathbb{R})$ from a finite number of point samples, with the specific choice of $u^{(0)}=0$. In this case, $\hat{x}$ is the consistent estimate that is closest to 0, and hence of minimum norm. It can be easily shown from the knowledge of (8) that this $\hat{x}$ must be in $\mathscr{G}$. We note here that [17] studied the case where consistent reconstruction is constrained to a linear subspace that may be different from $\mathscr{G}$.
|
As mentioned in the introduction, the objective of this paper is not to study the question of unique reconstruction of an input $x$ from its samples. The goal is to perform the best possible approximation of $x$ from given samples $\mathbf{s}$, whatever they are. While the term "best possible" would require some definition, it is at least intuitive that any reconstruction of $x$ that is not consistent with its samples cannot be optimal. This idea is in fact rationally supported by the following property.
|
D
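The row above claims that, from a zero initial estimate, the POCS iteration converges to the pseudo-inverse solution $S^{\dagger}\mathbf{s}$. A minimal finite-dimensional sketch of this behavior (not the paper's operator-theoretic algorithm) is cyclic projection onto the sampling hyperplanes $\{x:\langle g_k,x\rangle=s_k\}$, i.e. the Kaczmarz method, which for a consistent underdetermined system started at 0 lands on the minimum-norm solution $G^{+}\mathbf{s}$:

```python
import numpy as np

def pocs_reconstruct(G, s, sweeps=2000, x0=None):
    """Cyclically project onto the hyperplanes {x : <g_k, x> = s_k}.

    Finite-dimensional analogue of the POCS iteration: started from
    x0 = 0, the iterates converge to the minimum-norm consistent
    estimate, i.e. the pseudo-inverse solution G^+ s.
    """
    x = np.zeros(G.shape[1]) if x0 is None else np.array(x0, dtype=float)
    for _ in range(sweeps):
        for gk, sk in zip(G, s):
            # Orthogonal projection onto one sampling hyperplane.
            x += (sk - gk @ x) / (gk @ gk) * gk
    return x

rng = np.random.default_rng(0)
G = rng.standard_normal((3, 5))   # 3 samples of a 5-dim signal: reconstruction not unique
s = G @ rng.standard_normal(5)    # consistent samples of some input

x_hat = pocs_reconstruct(G, s)
assert np.allclose(x_hat, np.linalg.pinv(G) @ s)   # minimum-norm (pseudo-inverse) solution
assert np.allclose(G @ x_hat, s)                   # consistency with the samples
```

Starting from a nonzero `x0` instead reproduces the other limit described above: the consistent estimate closest to the initial guess.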
|
$\mathbf{Y}_{EB}\doteq[\mathbf{Y}_{EB}^{(1)},\mathbf{Y}_{EB}^{(2)}]$.
|
But channel probing from a node with more antennas to another node with fewer antennas should generally result in a larger $C_S$ in the regime of high power. This is because
|
We will assume $\psi_A\gg n_A$ and $\psi_B\gg n_B$ so that the channel estimation errors at all nodes based on the public pilots are negligible, as explained later.
|
We consider a MIMO channel between two legitimate nodes A and B (Alice and Bob) in the presence of an eavesdropper (Eve). The numbers of antennas at these nodes are $n_A$, $n_B$, and $n_E$, respectively. The channel response matrices from Alice to Bob and from Bob to Alice are denoted by $\mathbf{H}_{BA}$ and $\mathbf{H}_{AB}$ respectively, and the channel response matrices from Alice to Eve and from Bob to Eve are denoted by $\mathbf{G}_A$ and $\mathbf{G}_B$ respectively. Note that all channels are flat-fading within the bandwidth or subcarrier of interest. Also note that all channels are assumed to be block-wise fading, i.e., all channel matrices are constant within each coherence period but vary independently from one coherence period to another.
|
Note that the matrices with the superscript $^{(1)}$ are associated with the public pilots, and those with $^{(2)}$ are associated with the random symbols. More specifically, we can write
|
D
|
Directly factorizing speech into different subspaces does not guarantee the disentanglement of speech. In this section, we introduce some techniques to achieve better speech attribute disentanglement: 1) information bottleneck, 2) supervision, 3) gradient reverse, and 4) detail dropout. Please refer to Appendix B.1 for more training details.
|
Inspired by [16, 17], to force the model to remove unnecessary information (such as prosody in the content subspace), we construct the information bottleneck in the prosody, content, and acoustic details FVQ by projecting the encoder output into a low-dimensional space (i.e., 8 dimensions) and subsequently quantizing within this low-dimensional space. This technique ensures that each code embedding contains less information, facilitating information disentanglement [32, 46]. After quantization, we project the quantized vector back to the original dimension.
|
We have the following considerations: 1) empirically, we find that the codec tends to preserve undesired information (e.g., content, prosody) in the acoustic details subspace since there is no supervision; 2) intuitively, without acoustic details, the decoder should reconstruct speech only with prosody, content, and timbre, albeit at low quality. Motivated by these, we design the detail dropout by randomly masking out $z_d$ during the training process with probability $p$. With detail dropout, we achieve a trade-off between disentanglement and reconstruction quality: 1) the codec can fully utilize the prosody, content, and timbre information to reconstruct the speech, ensuring the disentanglement ability, albeit at low quality; 2) we can obtain high-quality speech when the acoustic details are given.
|
To effectively generate speech with better quality, similarity and prosody, we propose a TTS system with novel factorized diffusion models to generate natural speech in a zero-shot way. Specifically, 1) we introduce a novel neural speech codec with factorized vector quantization (FVQ), named FACodec, to decompose the speech waveform into distinct subspaces of content, prosody, timbre, and acoustic details and reconstruct the speech waveform from these disentangled representations, leveraging information bottleneck [16, 17], various supervised losses, and adversarial training [18] to enhance disentanglement; 2) we propose a factorized diffusion model, which generates the factorized speech representations of duration, content, prosody, and acoustic details, based on their corresponding prompts. This design allows us to use different prompts to control different attributes. The overview of our method, referred to as NaturalSpeech 3, is shown in Figure 1.
|
Avoiding information leakage (such as prosody leaking into content) can enhance disentanglement. Inspired by [47], we adopt adversarial classifiers with the gradient reversal layer (GRL) [48] to eliminate undesired information in the latent space. Specifically, for prosody, we apply phoneme-GRL (i.e., a GRL layer predicting phoneme labels) to eliminate content information; for content, since pitch is an important aspect of prosody, we apply F0-GRL to reduce the prosody information for simplicity; for acoustic details, we apply both phoneme-GRL and F0-GRL to eliminate both content and prosody information. In addition, we apply speaker-GRL on the sum of $z_p$, $z_c$, $z_d$ to eliminate timbre.
|
A
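The information-bottleneck mechanics described above (project to a low dimension, quantize, project back) can be illustrated with a toy sketch. All dimensions, projections, and the codebook below are random illustrative stand-ins, not FACodec's learned layers:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, K = 256, 8, 1024            # encoder dim, bottleneck dim, codebook size (illustrative)

W_down = rng.standard_normal((d, D)) / np.sqrt(D)   # stand-in for the learned down-projection
W_up = rng.standard_normal((D, d)) / np.sqrt(d)     # stand-in for the learned up-projection
codebook = rng.standard_normal((K, d))              # low-dimensional code embeddings

def bottleneck_quantize(h):
    """Project a frame into the 8-dim bottleneck, snap to the nearest code, project back."""
    z = W_down @ h                                           # low-dimensional latent
    idx = int(np.argmin(((codebook - z) ** 2).sum(axis=1)))  # nearest codebook entry
    return W_up @ codebook[idx], idx                         # back to the model dimension

h = rng.standard_normal(D)        # one frame of encoder output
q, idx = bottleneck_quantize(h)
assert q.shape == (D,) and 0 <= idx < K
```

Because each frame must pass through only 8 dimensions and one of K codes, the quantized representation can carry far less information than the 256-dim input, which is the disentanglement pressure the row describes.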
|
Verifying the properties of multiagent systems has been an important research area for several decades [pallottino2006probabilistic, doan2014verifying, kouvaros2019formal].
|
Meanwhile, backward reachability analysis algorithms, such as BReach-LP [rober2022backward] (which can handle this scenario as a single-agent problem), prove the agents will not collide because although multiple disjoint paths might lead to the target set, these paths must remain within the target set.
|
Meanwhile, neural network (NN) control policies are becoming a key component of many state-of-the-art multiagent systems, such as for swarming [tolstaya2020learning] and autonomous driving [palanisamy2020multi], yet the above verification algorithms cannot deal with these NNs.
|
Figure 1: Complex interactions between agents present challenges in formal safety verification. This analysis is further complicated when the agents are using NN control policies.
|
Obtaining formal guarantees (e.g., for collision avoidance, robustness) for closed-loop systems with NN controllers, i.e., neural feedback loops (NFLs), remains challenging primarily due to the high dimensionality and nonlinearities of NNs.
|
B
|
Pressure at the intake manifold $p_{\text{im}}$;
|
It is assumed that the in-cylinder pressure during the intake stroke is equal to $p_{\text{im}}$.
|
Temperature at the intake manifold $T_{\text{im}}$; and
|
Pressure at the intake manifold $p_{\text{im}}$;
|
$T_{\text{im}}$ [°C]
|
B
|
This paper is focused on chance constraints imposed at discrete epochs, although there are studies that extend the concept to continuous-time chance constraints [23].
|
This paper is focused on chance constraints imposed at discrete epochs, although there are studies that extend the concept to continuous-time chance constraints [23].
|
A simple yet versatile form of state chance constraints is an intersection of hyperplane constraints:
|
The presented formulation exploits the Markovian property of the system and incorporates various chance constraints, including state-triggered chance constraints.
|
Fig. 3(b) shows the satisfaction of the state chance constraints under the optimized policy. (Footnote 1: Since chance constraints are imposed in discrete time, constraint violation may occur momentarily in-between constrained epochs.) Constraint violations in Fig. 3(b) correspond to dynamically sensitive perilune passages.
|
B
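For the hyperplane-intersection chance constraints mentioned above, a standard result is that under a Gaussian state $x\sim\mathcal{N}(\mu,\Sigma)$, each constraint $\Pr(a^\top x\le b)\ge 1-\varepsilon$ has the deterministic reformulation $a^\top\mu+\Phi^{-1}(1-\varepsilon)\sqrt{a^\top\Sigma a}\le b$. A minimal sketch with illustrative numbers (not the paper's trajectory-optimization setup):

```python
import numpy as np
from statistics import NormalDist

def halfplane_chance_constraint(mu, Sigma, a, b, eps):
    """Check P(a^T x <= b) >= 1 - eps for x ~ N(mu, Sigma).

    Uses the standard deterministic reformulation
    a^T mu + z_{1-eps} * sqrt(a^T Sigma a) <= b.
    """
    z = NormalDist().inv_cdf(1.0 - eps)            # Gaussian quantile
    return float(a @ mu) + z * float(np.sqrt(a @ Sigma @ a)) <= b

mu = np.zeros(2)                  # predicted mean state (illustrative)
Sigma = 0.01 * np.eye(2)          # predicted state covariance (illustrative)
a = np.array([1.0, 0.0])

assert halfplane_chance_constraint(mu, Sigma, a, b=0.5, eps=0.05)       # satisfied
assert not halfplane_chance_constraint(mu, Sigma, a, b=0.1, eps=0.05)   # bound too tight
```

An intersection of such hyperplanes can be handled conservatively by checking each one with its own risk allocation (e.g., splitting the total $\varepsilon$ across constraints via a union bound).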
|
The Controllability Gramian serves as a significant mathematical construct that offers vital insights into the control characteristics of a network [16, 17, 18]. Utilizing the Controllability Gramian, we can quantitatively assess the ease of transitioning from one state to another, taking into account the necessary control energy as defined in equation (4).
|
The Controllability Gramian serves as a significant mathematical construct that offers vital insights into the control characteristics of a network [16, 17, 18]. Utilizing the Controllability Gramian, we can quantitatively assess the ease of transitioning from one state to another, taking into account the necessary control energy as defined in equation (4).
|
In summary, when partitioning the Laplacian matrix $L$ of an undirected connected graph, as demonstrated in equation (2), the matrix $A$ is revealed to be positive definite, ensuring the system's stability. This stability enables the computation of the controllability Gramian $\mathcal{W}$, which serves as a valuable measure of controllability in terms of energy-related quantification. It also facilitates the derivation of various controllability statistics [16, 17, 18]. Some of these statistics are further discussed below.
|
We introduce a novel graph embedding (representing graphs as vectors) called CTRL, which is based on the control properties of networks defined on graphs, including meaningful metrics of controllability such as the spectrum of the Gramian matrix.
|
For the system delineated in equation (3), the infinite horizon controllability Gramian is defined as follows:
|
D
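The infinite-horizon controllability Gramian referenced above can be computed by solving a Lyapunov equation. A minimal sketch for a stable LTI system $\dot{x}=\hat{A}x+Bu$ with $\hat{A}$ Hurwitz (in the row above, with $A$ positive definite from the Laplacian partition, $\hat{A}=-A$ would be the stable drift under one common sign convention, which is our assumption here): $\mathcal{W}$ solves $\hat{A}\mathcal{W}+\mathcal{W}\hat{A}^\top+BB^\top=0$, solved below via a Kronecker-product linear system.

```python
import numpy as np

def controllability_gramian(A, B):
    """Infinite-horizon controllability Gramian of dx/dt = A x + B u (A Hurwitz).

    Solves the Lyapunov equation A W + W A^T + B B^T = 0 using the
    Kronecker identity vec(A W + W A^T) = (I (x) A + A (x) I) vec(W)
    with column-major (Fortran-order) vectorization.
    """
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    q = (B @ B.T).reshape(-1, order="F")
    return np.linalg.solve(K, -q).reshape((n, n), order="F")

# Example: a stable diagonal system, where the Gramian is known in closed form.
A = np.array([[-2.0, 0.0], [0.0, -3.0]])
B = np.eye(2)
W = controllability_gramian(A, B)
assert np.allclose(W, np.diag([1 / 4, 1 / 6]))        # closed-form check
assert np.allclose(A @ W + W @ A.T + B @ B.T, 0.0)    # Lyapunov residual
# The energy to steer to a target x_f from rest scales as x_f^T W^{-1} x_f,
# which is the energy-based controllability quantification in equation (4).
```

Statistics such as $\operatorname{tr}(\mathcal{W})$, $\operatorname{tr}(\mathcal{W}^{-1})$, or $\lambda_{\min}(\mathcal{W})$ can then be read off the computed matrix.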
|
Furthermore, Xie et al. [10] established a task-oriented semantic communications system for machine translation and visual question-answering tasks by fusing textual and visual semantic features. The speech recognition and speech synthesis tasks are performed in an efficient speech semantic transmission scheme by converting the speech input into the task-related semantic features [11]. In [12], Xu et al. proposed reinforcement learning-enabled semantic communications for scene classification in unmanned aerial systems. Zhang et al. investigated a semantic communication system for extended reality (XR) tasks by transmitting highly compressed semantic information to reduce network traffic. In [13], a unified multimodal multi-task semantic communication architecture, named U-DeepSC, has been developed by sharing trainable parameters amongst various tasks to reduce the redundancy of semantic features and accelerate the inference process.
|
In this paper, a robust semantic communication system for speech transmission, named Ross-S2T, is proposed. We argue that existing research works on task-oriented semantic communications for speech only extract textual semantic features constrained to the source language, i.e., shallow semantic features, encouraging the exploration of deep semantic features spanning various languages. Moreover, the intractable semantic impairments inherent in the corrupted speech are investigated. In this context, the speech-to-text translation (S2TT) task is considered in semantic transmission scenarios with corrupted speech input. The contributions of this paper are summarized as follows:
|
In this section, we introduce robust semantic communication systems for speech transmission and consider S2TT as the transmission goal. The considered system aims to address two primary challenges. The first challenge is to deliver E2E semantic exchange and achieve efficient transmission from speech in the source language to text in the target language. The second one is to devise a semantic impairment suppression mechanism to contend with semantic impairments within the corrupted speech. To this end, the novel deep semantic codec mechanism is established to facilitate speech transmission for S2TT, and the deep semantic compensator is first developed to compensate for the complicated semantic impairments.
|
In this paper, we study the robust semantic communications for speech transmission, named Ross-S2T, to support end-to-end speech-to-text translation (S2TT). Particularly, a deep semantic encoder is developed to learn textual semantic features related to another language from the clear speech, which enables the deep semantic exchange to achieve S2TT at the receiver. Moreover, a generative adversarial network (GAN)-enabled deep semantic compensator and a probe-aided compensator are tailored for corrupt speech scenarios by estimating the impaired semantic information and attaining as accurate deep semantic features as possible. Simulation results demonstrated the superiority of the proposed Ross-S2T to serve S2TT tasks and suppress semantic impairments.
|
A semantic communication system for S2TT in the context of clear speech, named DeepSC-S2T, is developed by utilizing a deep semantic encoder to extract textual semantic features related to the target language from speech in the source language.
|
A
|
UAV, based on a utility map described by a Gaussian Mixture Model (GMM). The proposed strategy, based on
|
The uncertainty map is a nonnegative function $h:\mathbb{R}^{2}\rightarrow\mathbb{R}^{+}_{0}$ that
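The excerpt only requires the uncertainty map to be nonnegative; a minimal sketch of one such map, assuming (hypothetically) it takes the Gaussian Mixture Model form mentioned in the surrounding text, with made-up weights, means, and covariances:

```python
import numpy as np

def gmm_uncertainty(x, weights, means, covs):
    """Evaluate a nonnegative uncertainty map h(x) as a Gaussian mixture.

    x: (2,) query point; weights: (K,); means: (K, 2); covs: (K, 2, 2).
    """
    h = 0.0
    for w, mu, cov in zip(weights, means, covs):
        d = x - mu
        inv = np.linalg.inv(cov)
        norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
        h += w * norm * np.exp(-0.5 * d @ inv @ d)
    return h

# Hypothetical two-component mixture over the plane
weights = np.array([0.6, 0.4])
means = np.array([[0.0, 0.0], [5.0, 5.0]])
covs = np.array([np.eye(2), 2.0 * np.eye(2)])
print(gmm_uncertainty(np.array([0.0, 0.0]), weights, means, covs))
```

Since each mixture component is a nonnegative density scaled by a nonnegative weight, $h(x) \geq 0$ holds for every $x$, as required.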
|
Model Predictive Control (MPC), adopts a formulation that promotes the exploration of the map by penalizing
|
model of the vehicle dynamics. Moreover, the sets $\mathcal{X}$ and $\mathcal{U}$ constitute the
|
C
|
(RISs Assist Probability) In the depicted scenario in Fig. 2, the distances of the BS-user, BS-RIS, and RIS-user links are $\xi$, $s$, and $r$, respectively. We introduce the angle between the BS-user link and the RIS-user link as $\vartheta$, and define $\psi=2\pi-\vartheta$. Since we assume that there is always a LoS link between the BS and RISs, the probability that a RIS can provide a reflection link for the BS and the user is $P_{LoS}(r)$. Employing location-dependent thinning, which retains each point of the RIS PPP $\Phi_{R}$ with probability $P_{LoS}(r)$, we obtain an inhomogeneous PPP $\Phi_{R}^{L}$ with a density of $\lambda_{R}^{L}=\lambda_{R}\cdot P_{LoS}(r)$. RISs in $\Phi_{R}^{L}$ can assist the communication between the BS and the user.
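The location-dependent thinning described above can be sketched numerically. The excerpt leaves $P_{LoS}(r)$ generic, so the exponential blockage model $P_{LoS}(r)=e^{-\beta r}$ and all numeric parameters below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def thin_ris_ppp(points, user, beta=0.05, rng=rng):
    """Location-dependent thinning of a RIS PPP: keep each RIS with
    probability P_LoS(r) = exp(-beta * r), where r is the RIS-user
    distance (assumed blockage model). The retained points form an
    inhomogeneous PPP with density lambda_R * P_LoS(r)."""
    r = np.linalg.norm(points - user, axis=1)
    keep = rng.random(len(points)) < np.exp(-beta * r)
    return points[keep]

# Homogeneous PPP of RISs on a 100 x 100 m square with density 0.05 /m^2
lam, side = 0.05, 100.0
n = rng.poisson(lam * side * side)
ris = rng.uniform(0.0, side, size=(n, 2))
los_ris = thin_ris_ppp(ris, user=np.array([50.0, 50.0]))
print(len(ris), len(los_ris))
```

RISs close to the user survive the thinning more often, reproducing the intuition that nearby RISs are more likely to offer a LoS reflection link.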
|
Given that the distance between a user and its serving BS is $\xi$, the probability that the serving BS and the user are associated through the reflected LoS link is
|
(Association Criterion) Each user communicates with its serving BS through the link providing the highest signal power, either a direct link or a reflected link,
|
Proof: Since the multiplicative path loss of the RIS reflection link is several orders of magnitude larger than that of the direct link, the association criterion degenerates. When a direct LoS link between the user and the serving BS exists, there is a direct association. On the other hand, when the direct link is NLoS and a LoS reflection link exists, the association criterion is an association through RIS reflection.
|
where T𝑇Titalic_T denotes the set threshold, and γ𝛾\gammaitalic_γ denotes the SINR of the associated link, which could be either a direct link or a reflected link.
|
B
|
Fig. 6 plots the effect of different codebook sizes on the beam training overhead, where the training overhead is defined as the number of time slots required for scanning during training. The exhaustive approach traverses the entire beam space, so its training time is proportional to the codebook size, resulting in a very high training overhead. HMB training is at the logarithmic level with a training time of $Q=BL=O(B\log MN)$, which significantly reduces the training overhead compared to exhaustive beam training. Although the training overhead of EIMB is low in the figure, it presents a much lower accuracy.
|
Fig. 4 plots the beam identification accuracy versus SNR when considering soft and hard decisions, where only "HMB" training uses the soft decision and the others use the hard decision of threshold comparison. It can be seen that our proposed HMB training method has the best performance, especially when the SNR is relatively small. Specifically, when the SNR is 10 dB and the number of beams is $B=32$, the soft decision improves the accuracy by 96.9%. This is because when the SNR is very low, the signal power can be of the same order as the noise power, or even submerged in the noise; the threshold of the hard decision thus needs to be determined accurately and adaptively, which is difficult. On the contrary, the soft decision is based on relative value comparisons, so it needs no threshold and is less affected by noise. In addition, when the SNR is higher than 20 dB, the HMB codebook improves the accuracy by 22% over the EIMB codebook, because the equal-interval method has fixed leakage interference, while the randomness of hashing adds a random perturbation to the leakage interference between sub-beams, reducing the effect of this interference on the subsequent decision.
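The soft-versus-hard decision contrast above can be illustrated with a toy sketch; the received-power values and the thresholds below are made up purely for illustration, not taken from the paper's setup:

```python
import numpy as np

def hard_decision(powers, threshold):
    """Declare the beams whose received power exceeds a fixed threshold."""
    return np.flatnonzero(powers > threshold)

def soft_decision(powers):
    """Pick the beam with the largest received power; a relative
    comparison, so no threshold has to be tuned."""
    return int(np.argmax(powers))

# Toy received powers (hypothetical values): beam index 3 is the aligned one
powers = np.array([0.2, 0.5, 0.4, 2.5, 0.3, 0.1, 0.6, 0.4])

print(soft_decision(powers))                 # 3
print(hard_decision(powers, threshold=2.0))  # [3]
# A badly chosen threshold misses the aligned beam entirely:
print(hard_decision(powers, threshold=3.0))  # []
```

The soft decision always returns some beam regardless of the absolute power scale, whereas the hard decision's outcome hinges on a threshold that is hard to set adaptively at low SNR.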
|
Specifically, we first construct the near-field single-beam training codebook, which keeps the interference between the training beams as small as possible. Further, for each BS, we use hash functions and jointly design the antenna responses to construct the HMB codebook. At each time slot, each BS selects one multi-arm beam codeword to transmit the signal, and users record their received signals until all the predefined codewords in the HMB codebook have been traversed. Finally, soft decisions and voting based on the received signal power are applied to obtain the aligned beam. Simulation results show that our proposed near-field HMB training method can improve the identification accuracy of near-field beam training to 96.4% of that of the exhaustive beam training method while reducing the training overhead to the logarithmic level. Further, we validate its applicability in the far-field scenario.
|
Fig. 3 plots the effect of SNR on the identification accuracy. With the same simulation setup, we use exhaustive and EIMB training with the near-field codebook and exhaustive training with the DFT codebook ("Exhaustive-DFT") as baselines. Firstly, it can be seen that as the SNR increases, the influence of noise becomes smaller and the identification accuracy of all beam training methods gradually increases; under exhaustive beam training, the accuracy with the near-field codebook converges to 1, while that with the far-field codebook is significantly lower, which confirms the effectiveness of the designed codebook in near-field conditions.
|
In this paper, the HMB training method was proposed for the near field and verified to be applicable to the far field as well. Firstly, by exploiting the polar-domain sparsity of the near-field steering vectors, we minimized the projection between the vectors at different sampling points and constructed the training beams for the near field. To further improve the performance of beam training, we used hash functions to generate multi-arm beams and employed the soft decision and voting mechanism to obtain the best-aligned codeword and maximize the received SNR. Simulation results show that our proposed beam training method maintains stable and satisfactory beam identification accuracy, reaching 96.4% of the exhaustive training performance while significantly reducing the training overhead to the logarithmic level.
|
D
|
J. K. Author, “Title of paper,” in Abbreviated Name of Conf., City of Conf., Abbrev. State (if given), Country, year, pp. xxx–xxx.
|
J. K. Author, “Title of dissertation,” Ph.D. dissertation, Abbrev. Dept., Abbrev. Univ., City of Univ., Abbrev. State, year.
|
J. K. Author, “Title of thesis,” M.S. thesis, Abbrev. Dept., Abbrev. Univ., City of Univ., Abbrev. State, year.
|
J. K. Author, “Title of report,” Abbrev. Name of Co., City of Co., Abbrev. State, Country, Rep. xxx, year.
|
Name of Manual/Handbook, x ed., Abbrev. Name of Co., City of Co., Abbrev. State, Country, year, pp. xxx–xxx.
|
B
|
One-bit radar, characterized by the use of one-bit analog-to-digital converters (ADCs), has witnessed significant advancements in radar processing and imaging in recent years [1, 2, 3, 4, 5, 6]. The primary advantages of one-bit sampling include simplified hardware requirements, reduced data volume, and lower power consumption, making it highly suitable for several applications, particularly on small platforms.
|
Although the sampling process potentially results in information loss, recent studies have demonstrated that this can be effectively mitigated through advanced signal processing techniques. In some instances, these methods can even enhance the overall system performance, for example, through higher sampling rates [7]. Moreover, one-bit radar has proven capable of performing all the functions of traditional high-bit radar, including direction-of-arrival (DOA) estimation [8, 9, 10, 11], range and Doppler estimation [12, 13, 14, 15], detection [16, 17], tracking [18], and imaging [19, 20]. Consequently, one-bit radar is emerging as an important development direction in the radar field, with profound implications for the design and application of future radar systems, particularly in the context of efficient and accurate target detection.
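The one-bit sampling the passage refers to can be sketched in a few lines: a one-bit ADC retains only the signs of the in-phase and quadrature components of the complex baseband signal (the sample values below are made up for illustration):

```python
import numpy as np

def one_bit_adc(x):
    """One-bit quantization of a complex baseband signal: keep only the
    signs of the in-phase (real) and quadrature (imaginary) components."""
    return np.sign(np.real(x)) + 1j * np.sign(np.imag(x))

# Toy complex samples; after quantization only the quadrant information survives
x = np.array([0.3 - 2.0j, -1.5 + 0.4j, 2.2 + 0.1j])
print(one_bit_adc(x))  # each sample reduced to +/-1 +/- 1j
```

This is what drives the hardware and data-volume savings: each complex sample shrinks to two bits, at the cost of the amplitude information that the cited signal processing techniques then try to recover.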
|
In this study, we derived a novel Rao’s test for one-bit target detection in MIMO radar systems operating in colored noise environments, generalizing our prior work [7]. The detector is designed as a weighted matched filter, with weights derived from orthant probabilities tied to noise covariance matrix elements. This approach shows enhanced robustness and significant performance gains in colored noise scenarios compared to the white noise detector [7]. Through comprehensive theoretical analysis, we obtained closed-form approximations for both the null and non-null distributions, enabling accurate calculations of false alarm and detection probabilities. We also assessed the impact of noise covariance matrix mismatch, highlighting how it increases the false alarm probability and providing the necessary adjustments to maintain the CFAR property. The analysis of the non-null distribution revealed that performance degradation due to covariance mismatch can be quantified by a decrease in the non-centrality parameter of a chi-squared distribution. Simulation results confirmed the effectiveness and practical applicability of the proposed detector in realistic radar detection scenarios.
|
Development of Rao’s Test for One-Bit Target Detection in Colored Noise: This paper extends our previous work on a white noise detector, as documented in [7]. To the best of our knowledge, this is the first study in the field of one-bit radar processing that takes into account colored noise. This advancement marks a significant step forward in enhancing the applicability and accuracy of one-bit radar systems.
|
B
|
Let $\phi\in[-\pi,\pi]$ and $\theta\in[-\pi/2,\pi/2]$ denote the azimuth and elevation angles, respectively, of a signal arriving at the BS w.r.t. its reference position.
|
Next, we derive the effective antenna gain for each 6DMA surface, which in general depends on the signal arrival angles $(\theta,\phi)$ as well as the rotation $\mathbf{u}_{b}$ of the 6DMA surface. In addition, it is heavily dependent on the radiation pattern of each antenna, which characterizes the antenna radiation power distribution over different directions [30].
|
$\mathbf{f}=[\cos(\theta)\cos(\phi),\,\cos(\theta)\sin(\phi),\,\sin(\theta)]^{T}.$
|
The pointing vector corresponding to the direction $(\theta,\phi)$ is thus defined as
|
Next, we define the effective gain of each antenna of the $b$-th 6DMA surface in dBi in terms of the local-CCS signal angles $(\tilde{\theta}_{b},\tilde{\phi}_{b})$ as $A(\tilde{\theta}_{b},\tilde{\phi}_{b})$ (to be specified in Section V based on the practical antenna radiation pattern).
|
C
|
A continuous controller $u:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$ which satisfies (22) for all $x\in\mathcal{C}$ renders $\mathcal{C}$ forward-invariant under the dynamics (21) [13, Theorem 4].
|
A common way to synthesize controllers that render $\mathcal{C}$ forward invariant is via a parametric QP [1]. To this end, we consider a single integrator being driven by the CBF constraint (22) and actuator constraints:
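For a single integrator $\dot{x}=u$ with a CBF constraint $\nabla h(x)^{T}u \geq -\alpha h(x)$, the parametric QP $\min_u \|u-u_{\rm nom}\|^2$ has a closed-form solution. The sketch below assumes this unconstrained-in-actuation special case (actuator limits, which the text also mentions, are omitted), with a toy safe set $h(x)=1-\|x\|^2$:

```python
import numpy as np

def cbf_qp_single_integrator(u_nom, grad_h, h, alpha=1.0):
    """Closed form of: min ||u - u_nom||^2  s.t.  grad_h @ u >= -alpha * h,
    for a single integrator (actuator constraints omitted in this sketch)."""
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0:
        return u_nom  # nominal input already satisfies the CBF constraint
    # otherwise project u_nom onto the boundary of the half-space
    return u_nom - slack * grad_h / (grad_h @ grad_h)

# Safe set C = {x : h(x) = 1 - ||x||^2 >= 0}; nominal input pushes x outward
x = np.array([0.9, 0.0])
h = 1.0 - x @ x
grad_h = -2.0 * x
u = cbf_qp_single_integrator(np.array([1.0, 0.0]), grad_h, h)
print(u, grad_h @ u + h)  # constraint value is >= 0 after filtering
```

The resulting controller is continuous in the state, which is exactly the property the forward-invariance theorem quoted above relies on.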
|
One class of systems for which there are results on the stability and safety of systems driven by parametric optimization-based controllers are those arising from CBFs and control Lyapunov functions (CLFs) [1]. In these works, CLF and CBF constraints are jointly enforced in a state-dependent quadratic program (QP). To guarantee feasibility of the QP when the CLF and CBF inequalities cannot be jointly satisfied, stability is commonly relaxed by introducing a slack variable. This relaxation results in a lack of stability guarantees even for arbitrarily large penalties on the slack variable [12]. Recent work [15] studied a variant of the CLF-CBF QP controller and demonstrated how to tune the penalty parameter and how to estimate the basin of attraction of the origin.
|
We are interested in studying a continuous-time LTI system driven by a parametric optimization-based controller. We say that the optimization problem is parametric since it is a function of the state. Specifically, we look at parametric projection-based controllers. More concretely, for $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times m}$, $u^{\star}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$, $k:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}$, $g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p}$,
|
B
|
To address these problems, in this paper we propose a novel beam training method that exhibits low complexity as well as high accuracy. Specifically, we consider the uplink multi-RIS-assisted multi-user mmWave communication system. Beam training on the BS side is not discussed due to space limitations, but hashing beam training is applicable to arbitrary multi-antenna arrays. Taking the RIS-user link as an example, our method works as follows: in each time slot, each user transmits a pilot signal, and we construct the receive hashing multi-arm beams at the RISs. The multiple RISs then reflect the signals to the BS. Assuming that the BS can distinguish the signals of different users from the received superimposed signal power, we design a demultiplexing algorithm based on the soft decision and a multi-round voting mechanism to determine the aligned beams of different RISs to users. It is worth noting that we choose independent hash functions for each RIS to generate multi-arm beams, which ensures minimal correlation between different RIS reflection links.
|
In this paper, an HMB training method is proposed for multi-RIS-assisted multi-user communication systems. The proposed method utilizes independent hash functions to generate multi-arm beams. It effectively demultiplexes the signals reflected from different RISs by employing the soft decision on the received signal power. Further, we design a multi-round voting mechanism to obtain the aligned direction. Simulation results demonstrate the robustness and effectiveness of our proposed method in beam identification accuracy. Compared to existing methods, our approach achieves a significant improvement in accuracy of at least 20%. Furthermore, our method ensures that the training overhead remains manageable even with an increasing number of RISs and users, staying at the logarithmic level.
|
Furthermore, the randomness of hash functions, the soft decision, and the multi-round voting design can improve the identification accuracy. Then, the proposed method significantly reduces the training complexity, since it allows for simultaneous training of multiple RISs and uses multi-arm beams as well. Simulation results show the outstanding performance of our proposed beam training method in terms of both high identification accuracy and low training overhead compared to existing methods.
|
Fig. 5 plots the relationship between the number of directions and the training overhead. We fix the SNR to 5 dB and the identification accuracy to 60%. While the exhaustive, hierarchical, and EIMB training methods train alternately, our proposed HMB training method trains simultaneously, so its complexity does not increase with the number of RISs or users. It can be seen that the complexity of the HMB training method is at the logarithmic level, which greatly reduces the training overhead of the traditional methods. It is worth noting that although the hierarchical training method has a lower complexity at an accuracy of 60%, the previous figure showed that this method limits the accuracy to 75%.
|
In recent years, various beam training techniques and algorithms have been proposed, including the exhaustive beam training[5], hierarchical beam training[6, 7], equal interval multi-arm beam (EIMB) training method[4], and two-timescale-based beam training[8]. The exhaustive beam training method searches all possible beam directions at both the transmitter and receiver[5], resulting in significant delays and exponential complexity[9]. The hierarchical beam training method employs a multi-stage approach, dividing the beam space into two halves at each stage until the desired resolution is achieved, which offers a lower complexity but suffers from inherent drawbacks. That is, using wide beams in early layers reduces the beamforming gain, leading to identification errors. Moreover, these errors accumulate at subsequent subdivided beam layers[10]. The EIMB training method employs a predetermined codebook and gradually narrows down the search space through multiple rounds of training until it finds the aligned direction[4]. However, it depends on the results of the first round and the fixed beam composition method introduces leakage interference that is difficult to eliminate, which may limit the accuracy of beam identification in complex situations to some extent. Further, existing methods cannot support simultaneous training of multiple transmitters or receivers and instead need to take turns in a sequential manner, resulting in sub-optimal performance.
|
B
|
Gupta et al. [18] propose a calibration metric and loss to calibrate GBC detection models on small datasets.
|
We posit that existing SOTA techniques for GBC detection in US images exhibit suboptimal accuracy and generalization performance. Consequently, we advocate for a paradigm shift toward video-based GBC detection for US. Moreover, US video-centric detection of GBC with machine learning has not previously been attempted in the literature. We provide the first solution to the problem and present a strong baseline.
|
Detecting GBC from US images using Deep Neural Networks (DNNs) is challenging. US images often have low quality due to sensor issues, causing biases in DNNs and making it hard to pinpoint the gallbladder (GB) region accurately [5]. The handheld nature of the probe also means the views are not aligned, adding to the challenge. Malignant cases, unlike non-malignant ones with clear anatomy, are difficult to detect due to the lack of a distinct GB boundary or shape and the presence of masses. While there are recent efforts to circumvent the challenges of US for accurate GBC detection [5, 6, 8], these techniques are primarily image-based. Due to the challenges discussed earlier, single images may lack unambiguous features for malignancy detection. We also observe in our experiments that the image-centric methods do not generalize well to unseen datasets. In response, we argue in favor of a paradigm shift to video-based GBC detection from US. Notably, video-based GBC detection from US has not been attempted in the literature.
|
This study addresses the limitations of current US image-based GBC detection techniques, emphasizing the need for a paradigm shift towards US video-based approaches. Our novel design, named FocusMAE, strategically biases masking token selection from high-information regions and learns quality representations of GB malignancy. FocusMAE achieves state-of-the-art results on US video-based GBC detection.
|
Despite the above studies, we observe a notable gap in the literature regarding models for video-based GBC detection from US videos. This gap in research motivates the current work.
|
D
|
Based on our theoretical analysis, we can construct the practical reflection vector and the hybrid beamforming matrices by projecting the near-optimal ones onto the modulus constraints (i.e., just taking the phases of the complex values). Via simulations, we demonstrate that the proposed method can outperform the SOTA method in [13] for large-but-finite mmWave SU-MIMO systems. Furthermore, combining with the channel estimation method in [19], it is shown that the proposed method is robust to channel estimation errors, almost achieving the ideal performance while having a lower training overhead. Due to its attractive performance and lower complexity, our construction would be a good candidate for mmWave MIMO systems.
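The projection onto the modulus constraints mentioned above (keeping only the phases of the complex entries) is straightforward to sketch; the example vector below is made up for illustration:

```python
import numpy as np

def project_unit_modulus(v):
    """Project a complex vector onto the unit-modulus set by keeping only
    the phases: v_i -> exp(j * angle(v_i))."""
    return np.exp(1j * np.angle(v))

# Toy near-optimal vector with arbitrary magnitudes
v = np.array([0.5 + 0.5j, -2.0 + 0.0j, 0.0 - 3.0j])
v_hat = project_unit_modulus(v)
print(np.abs(v_hat))  # all entries now have unit modulus
```

Each entry of the projected vector keeps the phase of the original entry while its magnitude is forced to one, which is exactly what passive reflecting elements and phase-shifter-based analog beamformers can realize.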
|
In this section, we provide the theoretical analysis of the proposed reflection vector and the hybrid beamforming, by providing the proofs of the key lemmas.
|
We compare the proposed reflection vector in (24) with that of the state-of-the-art (SOTA) method in [13]. In fact, the reflection vector in [13] is identical to our limit $\mathbf{v}_{\rm limit}$ in (22). While both methods have identical reflection vectors in the large-system limit, they differ completely in practical large-but-finite IRS-aided MIMO systems. Via experiments in Section V, it is demonstrated that the proposed reflection vector can outperform the SOTA in [13], owing to its better-matched reflection vector. Noticeably, the proposed $\hat{\mathbf{v}}^{\star}$ is indeed practical as it can be constructed using only the effective channel in (4). In contrast, to construct the reflection vector in [13] (i.e., $\mathbf{v}_{\rm limit}$ in (22)), the TX-IRS and IRS-RX channels must be estimated. More seriously, it is intractable to recover the UPA steering vectors in (22) from these channel matrices. We derive, for the first time, a near-optimal and practical design of the reflection vector and hybrid beamforming for IRS-aided mmWave MIMO systems.
|
The remaining part of this paper is organized as follows. In Section II, we describe the IRS-aided SU-MIMO systems and define their wireless channel models. We state the main results of this paper in Section III, by providing the asymptotically near-optimal reflection vector and hybrid beamforming matrices. Theoretical analysis is conducted in Section IV. Section V provides the simulation results and Section VII concludes the paper.
|
As investigated in the existing works [15, 16, 17, 18, 19], there are efficient methods to estimate the so-called effective (or cascaded) channel. Focusing on the SU-MIMO systems with multiple antennas at the RX, which is the system model considered in this paper, the effective channels in the literature are categorized into two types. In [15, 16], the effective channel is defined as the Kronecker product of the TX-IRS and IRS-RX channels. Due to its large-dimension, this effective channel might not be suitable for the joint optimization of the reflection vector and the hybrid beamforming. Very recently in [19], the effective channel is defined in a more compact form (see Section II for details), thereby being more adequate for the joint optimization. Also, the channel estimation method, proposed in [19] by means of a collaborative low-rank approximation, can yield the best estimation accuracy, while having a lower training overhead. Motivated by this, we in this paper study the joint optimization for IRS-aided mmWave MIMO systems, only using the effective channel in [19]. Toward this, our major contributions are summarized as follows.
|
C
|
To leverage the potential of H-MIMO which will mainly operate in the near field, the fundamental aspects of near-field EM wave propagation need to be well understood. To this end, accurate near-field channel modeling will facilitate unveiling the fundamental limits of wireless operations in this region, but will also enable efficient H-MIMO system designs. The dense and large-size characteristics of H-MIMO bring new challenges in near-field channel modeling that need to be adequately addressed. In this article, we aim to provide a panoramic reference to the near-field H-MIMO channel modeling for both industry and academia, introducing efficient EM-domain channel models for the emerging H-MIMO paradigm. To the best of our knowledge, this is the first article contributing such a comprehensive overview to the area. Apart from a straightforward survey of existing works, we discuss the latest highly-organized near-field channel modeling categories and present an in-depth generalization of their distinctive features, modeling challenges, and evaluation criteria, as well as a list of key research directions on the topic.
|
In this section, we present channel modeling aspects for near-field H-MIMO systems. We commence by describing the main features and challenges with near-field H-MIMO channel modeling, and then, discuss evaluation criteria for such channel models. We also overview several existing near-field H-MIMO channel models.
|
Recent advances on metasurface-based antenna apertures enable H-MIMO. Such apertures can be active or passive, transmissive or reflective, and can be fed by waveguide structures or external sources [1]. In the following, we present the most significant categories of H-MIMO channel modeling.
|
In this section, we present the recent near-field H-MIMO LoS channel models of [6, 7], which are relevant to the H-MIMO system in Fig. 2(e) and require lower computational and measurement complexities than the available integral form TGF-based
|
C
|
We exploit the potential of DL estimates with the regularization provided by the physical knowledge of acoustic wave propagation.
|
Unlike prior approaches that consider only RIRs, we consider speech signals in real environments, a relevant step towards practical application scenarios.
|
We validated the proposed approach on a real dataset, considering different subsets of the available observations.
|
Moreover, due to the capacity limitations of the GPU memory, we consider signals with $N=800$ samples, and we recover the acoustic fields for different randomly selected windows of the speech signals.
|
The promising results of the devised method show that PINNs represent an appealing solution for generalizing sound field reconstruction to practical scenarios, thanks to the advantages of acoustic physical priors and the potential of DL strategies to infer representations from real data.
|
A
|
$\mathbf{z}$, $\boldsymbol{\beta}$, $\boldsymbol{\alpha}$, and $\boldsymbol{\eta}$ are the vectors which collect the variables $z_{i}^{(q)}$, $\beta_{u}$, $\alpha_{u}$, and $\eta_{u}$, respectively.
|
The NW-MMF JUBD problem maximizes the minimum SE of the UEs, which is equivalent to maximizing the minimum SINR of the UEs within the entire network. In this way, the NW-MMF JUBD problem can be formulated as
|
In this work, we formulated the joint user association and beamforming weights design (JUBD) problem for three objective functions (WSR, NW-PF, and NW-MMF). Since the formulated problems were non-convex, we employed reformulations and approximations to convert the problem to equivalent tractable SCA-based forms. Simulation results provided a comprehensive comparison between the statistical behavior of SE in different scenarios. The SE performance results under all three objective functions indicate that integrating HAPS with terrestrial networks improves the performance of the network. In addition, using NW-PF as the objective problem provides a good trade-off between achieved sum SE and minimum SE.
|
According to Fig. 2(b), the WSR algorithm provides the highest sum SE. In addition, according to Fig. 2(c), the NW-MMF objective function provides the highest minimum SE. Furthermore, NW-PF can be considered a trade-off between sum SE and minimum SE. Specifically, NW-PF provides a higher sum SE compared to NW-MMF and a higher minimum SE compared to the WSR scenario. Therefore, while the choice of the proper objective function for the JUBD Algorithm 1 in a vHetNet depends on the specific requirements of the network (improving the worst UE's performance, achieving a higher sum data rate, or providing fairness), it can be concluded from the results in Fig. 2 that the NW-PF objective function provides a good sum SE along with acceptable performance for the worst UE.
|
Adopting the logarithm utility function for the NW-PF objective function, the preliminary formulated NW-PF JUBD problem can be expressed as
|
D
|
The rain generator is trained for 200 epochs on Rain100L and Rain100L-S, and 400 epochs on Rain100H and Rain100H-S. The other training settings are those used in Sec. V-A. The augmentation rate is set as 0.5 for Rain100L, Rain100L-S and Rain100H-S, and 1% for Rain100H.
|
To further validate the diversity of the samples generated by rain generators in the paired manner, we test the generalization performance of models trained on the four augmented datasets. The performance on real rainy images of SPA-Data is shown in Table IV. In this case, the PSNR and SSIM with data augmentation by VRG-Net gain only a marginal advantage over the baseline in most cases. By comparison, the performance with data augmentation by TRG-Net still achieves a consistently clear improvement over the baseline in most test cases. These results demonstrate that the samples generated by our method are diverse, which improves the deraining performance not only on in-distribution but also on out-of-distribution tasks.
|
Table III provides the deraining results of all competing methods without and with data augmentation, on four datasets. “Baseline” denotes the performance of the derainers trained on the original data set without data augmentation here.
|
TABLE III: The deraining results on synthetic datasets. Baseline means the derainers trained on the original dataset without augmentation. VRG-Net and TRG-Net denote augmented training using VRG-Net and the proposed TRG-Net, respectively.
|
TABLE II: The quantitative results of all competing methods on synthetic and real datasets. A∗ indicates the deraining results of PReNet [30] trained on the pseudo-paired data generated by method A. The best result is highlighted with bold.
|
B
|
Table 7: Performance of the proposed approach compared to the vanilla multimodal transformer on test set. Valence and arousal for the Affwild2 dataset and accuracy for the Biovid dataset. I3D + ResNet18 backbones are used for the Affwild2 dataset and R3D + 1D CNN are used for the BioVid dataset. Default training/validation split is used in the Affwild2 dataset and 5-fold cross validation is performed on the BioVid dataset.
|
Table 7 shows the results of the proposed method with and without the joint representation. On the Biovid dataset, the joint multimodal transformer improves by 1.3% over the vanilla multimodal transformer.
|
For the modality fusion, the two feature vectors and the joint representation are fed to the joint transformer block, as shown in Figure 3. The FC layers that were added in the backbone training phase are removed from both backbones. 512-dimensional feature vectors from the visual and physiological backbones are obtained and fed to the joint multimodal transformer module. The backbones are frozen, and the joint transformer block is optimized using the ADAM optimizer with a learning rate of 5 × 10^-6, and the batch size is set to 128.
|
All the aforementioned transformer-based fusion architectures primarily focus on intermodality correlation. In contrast, in addition to modeling the intermodality relationships to capture the complementarity between modalities, the proposed method explicitly feeds the joint (combined) features to the multimodal transformer to introduce redundancy. By incorporating this third joint representation branch, the proposed model can access enhanced contextual information that cross-attention might only partially capture. Doing this improves the model’s understanding of complex relationships between the input sequences. Further, the proposed method becomes more robust to noise or irrelevant information present in individual sequences. This third joint representation allows the model to dynamically focus on this newly introduced information in sequences where both modalities are simultaneously noisy. This helps mitigate the sensitivity of cross-attention to noisy inputs and improves the system’s overall performance.
|
Multimodal emotion recognition systems outperform their unimodal counterparts, especially in in-the-wild environments. Missing and noisy modalities are a prevalent issue for in-the-wild emotion recognition systems. Many attention-based methods have been proposed in the literature to overcome this problem. These methods aim to weigh the modalities dynamically. This paper introduces a joint multimodal transformer for emotion recognition. This transformer-based architecture introduces a joint feature representation to add more redundancy and complementarity between audio and visual data. The two modalities are first encoded using separate backbones to extract intra-modal spatiotemporal dependencies. The feature vectors of the two modalities are then concatenated, and the joint feature vector is also fed into the Joint Multimodal Transformer module. This joint representation provides more fine-grained information about the inter-modal association between the two modalities. The proposed model outperforms state-of-the-art methods on the Biovid dataset and improves over the vanilla multimodal transformer by 6% on the Affwild2 dataset. Our future work includes introducing more modalities and more sophisticated backbones for effective feature extraction.
|
D
|
In this work, we introduce Lodge, a two-stage coarse-to-fine diffusion network, and propose characteristic dance primitives as intermediate-level representations for the two diffusion models.
|
can produce human motions that interact with 3D scenes while avoiding collisions. CALM [45] and ASE [35] introduce reinforcement learning and physical simulation environments to enhance the physical realism of generated movements.
|
Choreography Rules. Based on suggestions from professional choreographers and existing literature [1, 44, 4, 5], we want to generate long-duration dances that obey the following three basic choreographic rules:
|
We introduce a coarse-to-fine diffusion framework that can produce long dances in a parallel manner. Our method is capable of learning the overall choreographic patterns while ensuring the quality of local movements.
|
Our generated samples demonstrate that Lodge can generate, in parallel, dances that conform to choreographic rules while preserving local details and physical realism.
|
D
|
Similar to the Frechet Inception Distance (FID) [13], the FPD measures the 2-Wasserstein distance between genuine and synthetic Gaussian distributions within the model-derived feature spaces. On the other hand, the JSD represents a symmetric and smoothed adaptation of the Kullback-Leibler divergence [14], enabling the assessment of similarity between two probability distributions [12].
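A minimal sketch of the two measures follows; for the Fréchet/2-Wasserstein term we assume diagonal covariances so that the matrix square root becomes element-wise (the general metric requires a full matrix square root), and the discrete JSD uses natural logarithms:

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete probability distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def fpd_diag(mu1, var1, mu2, var2):
    """Squared 2-Wasserstein distance between two Gaussians, assuming
    diagonal covariances so the matrix square root is element-wise."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    var1, var2 = np.asarray(var1, float), np.asarray(var2, float)
    return float(np.sum((mu1 - mu2) ** 2) + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))

print(jsd([0.5, 0.5], [0.5, 0.5]))               # 0.0 for identical distributions
print(fpd_diag([0, 0], [1, 1], [1, 0], [1, 1]))  # 1.0: means differ by one unit
```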
|
Since the topological prior is concatenated with the prior latent matrix in the training process, point cloud generation in the evaluation phase should be different from the SP-GAN paper.
|
The last part of the prior latent matrix presents a topological prior. Such prior consists of the centroids of a point cloud from the repository, which is also the one fed into the discriminator.
|
This work utilizes centroids from 16, 32, 64, and 128 clusters, and also the original point cloud, as the topological prior. When a reference point cloud is concatenated with the prior latent matrix, it can be regarded as adding the centroids of 2048 clusters. The overall results are shown in Table 1. The unit of FPD is 10^-3.
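The cluster centroids that serve as the topological prior could, for instance, be computed with a few Lloyd (k-means) iterations; the clustering procedure below is an assumption made for illustration, as the excerpt does not name the algorithm used:

```python
import numpy as np

def centroids(points, k, iters=10, rng=None):
    """Return k cluster centroids of an (N, 3) point cloud via Lloyd iterations."""
    rng = np.random.default_rng(rng)
    c = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid, then recompute the means
        d = np.linalg.norm(points[:, None, :] - c[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                c[j] = points[labels == j].mean(axis=0)
    return c

cloud = np.random.default_rng(0).standard_normal((2048, 3))  # stand-in point cloud
prior = centroids(cloud, k=16, rng=0)
print(prior.shape)  # (16, 3)
```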
|
As shown in Fig. 1, the prior latent matrix consists of three components: the initial global state represented by a point cloud sphere, a set of local priors, and the centroids of a reference point cloud from the repository.
|
C
|
The segmented network tariff with three power levels is illustrated in fig. 2 for three time steps. In this illustration, the power-related network costs for time step 2 amount to ∑_{s=0,1,2} λ_s p_{2,s}, where p_{2,s} represents the power utilized within the p̄_s segment during time step 2, and λ_s signifies the network price (€/kWh) assigned to that specific power segment. In this model, λ_0 represents a base volumetric fee that applies even to low power consumption. It may be set to zero.
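A minimal sketch of this segmented cost computation follows; the segment widths and prices are invented values, and one-hour time steps are assumed so that kW and kWh coincide numerically:

```python
def segmented_cost(power, widths, prices):
    """Split `power` (kW drawn during one time step) across consecutive
    segments of the given widths, pricing each slice at its segment rate λ_s.
    Power is assumed to stay within the range covered by the segments."""
    assert len(widths) == len(prices)
    cost, remaining = 0.0, power
    for width, price in zip(widths, prices):
        slice_kw = min(remaining, width)
        cost += price * slice_kw
        remaining -= slice_kw
        if remaining <= 0:
            break
    return cost

# three segments of 5 kW each, with increasing prices λ_0 < λ_1 < λ_2
print(segmented_cost(12.0, widths=[5, 5, 5], prices=[0.02, 0.05, 0.10]))
```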
|
This paper considers the impact of pricing incentives on EV charging. For the purpose of this paper, we restrict the analysis to EV chargers that have their own grid connection, which does not need to be shared with other loads or injections. In this context, an EV user’s total charging costs consist of energy and network costs. Energy costs refer to the actual energy charged, while the network costs cover both Transmission System Operator (TSO) and DSO costs that these operators incur to maintain and operate the network infrastructures [22].
|
A network tariff, with or without segmentation, may also contain a consumption-independent (fixed) component. The magnitude of this component is important for cost recovery of the network owner/operator [21]. However, it is not considered for the analysis in this paper because it does not impact the charging schedule.
|
Comparing the cases of segmented network tariffs with and without flat energy prices, it is clear that the power levels are the primary determinant affecting aggregate power consumption, and the price level for the middle capacity band has a lesser impact. It should be noted that in all cases, the price level for the upper capacity band exceeds the dynamic energy prices.
|
The work in [12] explores the concept of power-based distribution tariffs for distribution system operators by charging customers based on peak power usage. It highlights the benefits of using power-based tariffs to incentivize customers to reduce peak demand. [13] discusses the transition towards power-based tariffs. It proposes alternative, more cost-reflective tariff structures like the power tariff, threshold power tariff, power limit tariff (also known as power band tariff), and step tariff. The power tariff consists of three cost components: a basic charge (in €/month), an energy charge (in €/kWh), and a power charge (in €/kW) based on peak power (i.e. the highest or the three highest hourly power values of the month). The threshold power tariff has similar cost components but applies the power charge only when consumption exceeds a predefined threshold. This tariff structure and its implications for reducing peak demand and ensuring cost recovery for Distribution System Operators have been further examined in studies in [14, 15, 16]. The power limit tariff simplifies to a single power charge, where consumers pre-select a maximum power level and are penalized for exceeding this limit, a concept aligning with the capacity subscription tariffs discussed in the work of [17, 18, 19]. Conversely, the step tariff uses a basic charge (in €/month) and a consumption charge (in €/kWh): if the average power remains within a certain predefined limit, the charge is low; otherwise, the charge is very high. Another type of power-based tariff, the segmented tariff, is proposed in [20]; it uses a consumption charge (in €/kWh) and assigns a tariff to each power threshold: the higher the threshold, the higher the tariff. The results show that this method can efficiently flatten the aggregate load profile in the case of residential users with energy storage.
Furthermore, [21] shows that a multi-level segmented network tariff can better flatten peaks while ensuring cost recovery for DSOs.
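As an illustration of the power tariff described above (all charge levels are invented for the example; the power charge here uses the average of the three highest hourly powers):

```python
def power_tariff_cost(hourly_kw, basic_eur=5.0, energy_eur_per_kwh=0.08,
                      power_eur_per_kw=4.0):
    """Monthly cost = basic charge + energy charge + power charge,
    where the power charge uses the mean of the three highest hourly powers."""
    energy_kwh = sum(hourly_kw)                  # 1-hour steps: kW equals kWh
    peak_kw = sum(sorted(hourly_kw)[-3:]) / 3.0  # average of three highest hours
    return basic_eur + energy_eur_per_kwh * energy_kwh + power_eur_per_kw * peak_kw

profile = [2.0] * 20 + [6.0, 7.0, 8.0, 5.0]  # a day-long stand-in load profile
print(power_tariff_cost(profile))
```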
|
B
|
Industrial manufacturing processes, especially in the chemical and pharmaceutical industries, are characterized by high system order with a high degree of correlation between the different process values. These processes typically have distinct process phases, each with specific operating parameters and goals. However, access to real-world benchmark data sets from these industries is challenging due to their complex nature and confidentiality concerns. Considering these premises, the following comparative experiments will utilize a laboratory batch process installed in our lab for mixing and blending pharmaceutical pre-products.
|
For the following comparative evaluation, both the UM transformation and the WT segmentation algorithms were executed using a watershed margin of ϕ = 10. The resulting UM with the identified clusters using the standard SOM algorithm is shown as a 3D and 2D plot in Fig. 3 (a). The standard SOM algorithm was able to identify three distinct process phases. This result contradicts the installed process flow control, in which five distinct process phases are implemented. In this simple example of the laboratory process, these phases can be read visually: by examining the time courses of the valve position (Z) in relation to the pump speed (D) in Fig. 2 (a), five dedicated combinations can be read for each batch. In contrast, as shown in Fig. 3 (b), the HULS concept successfully and reliably identifies the five process phases. When comparing the obtained results, the efficiency of the developed HULS concept becomes apparent. Even under challenging conditions, such as strong correlation between the process variables and the presence of an unbalanced learning dataset, the HULS method is able to reliably identify the process phases. Another aspect related to E_T is how efficiently the intrinsic structure of the data is mapped into the model. A direct comparison of the two UM visualizations in Fig. 3 shows that the HULS model is more compact, i.e. fewer neurons are needed to store the data structure.
|
Fig. 1 shows the flow chart of the considered laboratory batch process. The simplified process includes two liquid tanks, with tank B02 functioning as a supply reservoir for the process liquid. The outlet of tank B02 is equipped with a pressure sensor (P) for estimating the fill level, a flow meter (F), and an electro-pneumatic valve (Z) for regulating the flow rate of the liquid fed into the reaction tank B01. The B01 fill level is measured by an ultrasonic level gauge (L). Once B02 is filled to a defined level and after a product-specific reaction time, the liquid is transferred to a succeeding process unit using pump P01 with speed control (D) to adjust the flow rate. It should be noted that for the experiments, to illustrate a batch process behavior, the pre-product is returned to the supply tank B02. The sensors and actuators of the process are connected via an industrial fieldbus, which enables real-time data acquisition.
|
Anomaly detection is a crucial aspect of a comprehensive monitoring system and involves identifying atypical patterns, events, or behaviors that significantly deviate from normal behavior. For the comparative evaluation of the anomaly detection performance, another batch sequence was recorded. As shown in Fig. 4(a), the sequence contains six batches, of which the three sequences E1, E2 and E3 deviate from the normal behavior. In batch E1 the venting of reservoir B02 (cf. Fig. 1) has been reduced, in batch E2 the flow cross-section has been decreased by valve V02, and in batch E3 the level gauge position in tank B01 has been shifted. Batches N3 to N5 reflect the normal behavior of the process. In Fig. 4 (b) and (c) the time courses of the Euclidean distance e_{iv*} as well as the identified process phases c_i are visualized. Focusing on batches N3, N4, and N5, where normal behavior is expected, shows that the standard SOM model is very sensitive to new, unseen data, resulting in a high e_{iv*}. This phenomenon, already described in section 4.3, is problematic with respect to a real industrial application in the sense that false alarms can be generated (e.g. batch N4).
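The anomaly indicator used here, the Euclidean distance of each sample to its best-matching SOM neuron, can be sketched as follows; the codebook, noise level, and alarm threshold are synthetic stand-ins, not the trained model from the paper:

```python
import numpy as np

def bmu_distances(samples, codebook):
    """For each sample, the Euclidean distance to its best-matching unit (BMU)."""
    d = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=-1)
    return d.min(axis=1)

rng = np.random.default_rng(0)
codebook = rng.standard_normal((25, 4))                                  # SOM weight vectors
normal = codebook[rng.integers(0, 25, 100)] + 0.05 * rng.standard_normal((100, 4))
errors = bmu_distances(normal, codebook)
threshold = errors.mean() + 3 * errors.std()    # simple alarm threshold on e
anomaly = codebook[0] + 5.0                     # a far-off sample
print(bmu_distances(anomaly[None, :], codebook)[0] > threshold)  # True
```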
|
Holistic process monitoring, especially in the pharmaceutical industry, involves several essential elements such as identifying unknown process phases, tracking their progression, determining the duration of each phase, and detecting any anomalies. A prerequisite for robust process phase monitoring is the reliable clustering of the training dataset. As described in section 2, the use of SOMs with UM transformation and WT segmentation represents a robust approach for unsupervised clustering. As already discussed, the considered laboratory process can be taken as representative for a wide range of production processes: strong correlation among the process values and distinct process phases with fast transitions between them. However, the SOM algorithm exhibits considerable sensitivity to the inherent attributes of the training dataset, potentially leading to an increased topological error E_T, for example. As previously outlined, E_T measures the ability of the mapping algorithm to accurately model the topological dependencies within the dataset. As shown in Table 1, the proposed HULS procedure again significantly outperforms the conventional SOM model. Evidently, a high topological error E_T may cause the UM transformation to cluster the data insufficiently. This effect can be verified by examining the UM transformation and the WT segmentation.
|
B
|
θ_* = arg min_{θ ∈ ℝ^{n_θ}} η(θ).
|
Notice that when expected cost minimization is approximated by empirical cost minimization (possibly with regularization) as in [32, 17, 2, 45, 18, 33, 29, 25, 26, 28], the surrogate objective function is the sum of the empirical cost and the regularizer, which has closed-form and is free of estimation error.
|
However, since the distribution of the environments is unknown, (3) cannot be solved directly. A typical practice is to approximate it by empirical cost minimization (with regularization), e.g.,
|
The problem is challenged by the fact that the objective function η is non-convex and can only be estimated by sampling over the environments and the initial states in general. As stated in Assumption 2.1, the environments at training and testing follow an unknown distribution. The estimation error is the difference between the true value of η and the empirical average of the normalized cost, and the distribution of the estimation error is unknown and non-Gaussian in general.
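The empirical approximation of η discussed here can be sketched as a Monte-Carlo average over sampled environments; the quadratic cost and the Gaussian environment distribution below are illustrative assumptions, not the paper's setting:

```python
import numpy as np

def empirical_cost(theta, environments, cost):
    """Monte-Carlo estimate of η(θ) = E_e[cost(θ, e)] from sampled environments."""
    return np.mean([cost(theta, e) for e in environments])

# toy cost: how well θ matches an environment parameter e
cost = lambda theta, e: (theta - e) ** 2
rng = np.random.default_rng(0)
envs = rng.normal(loc=2.0, scale=0.5, size=1000)  # unknown true distribution

# for a quadratic cost, the empirical minimizer is the sample mean of e
theta_hat = envs.mean()
print(abs(theta_hat - 2.0) < 0.1)  # close to the true expected-cost minimizer
print(empirical_cost(theta_hat, envs, cost) <= empirical_cost(0.0, envs, cost))
```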
|
As the distribution of the environments is generally complicated or even unknown, it is challenging, if not impossible, to solve the expected cost minimization problem in closed form. Therefore, the methods, which target zero-shot generalization, instead solve an empirical mean minimization problem (possibly with regularization) given a finite amount of training environments. Related methods can be categorized into two classes. The first one is modifying an expected cost function and solving the modified problem through empirical cost minimization [15, 41, 32, 17, 18, 33]. For example, risk-sensitive criterion can be introduced to balance between a return and a risk, where the risk can be the variance of the return [32, 17]. Worst-case criterion is used to mitigate the effects of the variability induced by a given policy due to the stochastic nature of the unseen environments or the dynamic systems [18, 33].
|
B
|
With the aforementioned advantages, MA has garnered significant attention [7, 8, 9, 10, 11, 12, 13]. For instance, to further improve the channel capacity of multiple-input multiple-output (MIMO) systems, the authors of [7] proposed a new architecture incorporating MA, which demonstrates a substantial enhancement in communication performance compared to the conventional FPA system. Moreover, an MA-enabled multi-access channel for multi-user uplink transmission was investigated in [8], which leads to a noteworthy reduction in total transmit power compared to FPA systems.
|
In light of the above, this paper investigates the MISO interference channel aided by MAs at transmitters, leveraging the inherent characteristics of the multi-path channel and the extra spatial DoFs it offers to reduce inter-cell interference. The integration of MA introduces a new DoF in system design, enabling both desired signal enhancement and interference mitigation. To this end, by jointly optimizing MA positions and transmit beamforming, we aim to minimize the total transmit power under the individual signal-to-interference-plus-noise ratio (SINR) requirement of each user, which is a highly coupled non-convex problem. To address this challenge, an efficient algorithm based on block coordinate descent (BCD) is proposed, where MA positions and beamforming vectors are optimized in an alternating manner. Specifically, with fixed MA positions, the optimal beamforming vectors are obtained by second-order cone program (SOCP). On the other hand, by fixing beamforming vectors and scaling SINR constraints with a meticulously designed auxiliary value, the MA positions can be updated iteratively via successive convex approximation (SCA). Simulation results validate that the proposed algorithm can be effectively utilized in the MISO interference network, provided a certain region size for antennas moving is available. Consequently, the performance of spectrum sharing in interference network is dramatically improved. The MA system with simple beamforming, e.g., maximum ratio transmission (MRT), performs only slightly worse than that with complex beamforming and significantly better than the FPA system. Moreover, the number of antennas required for MA-aided interference network is drastically reduced, thus enabling the simplification of transmitter design.
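The overall BCD structure, alternately solving one block's subproblem while the other block is held fixed, can be illustrated on a toy smooth problem; the closed-form block updates below merely stand in for the SOCP and SCA subproblems of the actual algorithm:

```python
def bcd_minimize(iters=50):
    """Minimize f(x, y) = x^2 + y^2 + x*y - 2*x - 4*y by block updates.
    Each block update solves its subproblem exactly with the other block fixed."""
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = (2.0 - y) / 2.0   # argmin_x f(x, y): set df/dx = 2x + y - 2 = 0
        y = (4.0 - x) / 2.0   # argmin_y f(x, y): set df/dy = 2y + x - 4 = 0
    return x, y

x, y = bcd_minimize()
print(round(x, 4), round(y, 4))  # converges to the joint minimizer (0, 2)
```

Each block update never increases the objective, so the iterates converge monotonically here; the paper's algorithm shares this descent structure with far more involved subproblems.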
|
In this paper, we investigated the MA-enabled MISO interference channel system, where each transmitter is equipped with N MAs. By leveraging the additional design DoF provided by MA, we formulated an optimization problem for minimizing the total transmit power of interference network by jointly optimizing the MA positions and transmit beamforming. Since the resultant problem is highly coupled and non-convex, we proposed an alternating optimization algorithm based on the BCD method, where the optimization variables are iteratively updated by introducing the well-designed auxiliary value and invoking the SOCP and SCA techniques. Furthermore, numerical results were provided to clarify that the MA-aided interference network increases the number of cells that can be held and enables the simplification of transmitter design by moving antennas properly within a small region of several-wavelength size.
|
In Fig. 4, we compare the total transmit powers of different schemes versus the numbers of antennas, where the parameters are set to K = 2, L = 10, and A = 4λ. As can be observed, the total transmit power decreases as the number of antennas increases for all schemes. The superiority of MA schemes over the others, regardless of the type of beamforming employed, becomes evident. This can be attributed to the substantial reduction of the correlation among channel vectors caused by the positioning optimization of MAs, which facilitates effective mitigation of interference among different transmitters and consequently leads to a decrease in total transmit power. It is noted that the utility of SOCP-based beamforming can be approximated by implementing the simple MRT method in the MA system, which means that we can drastically reduce the complexity of transmit beamforming with negligible additional power. Besides, the MA scheme is also capable of effectively reducing the number of antennas required by more than half compared with the FPA scheme while maintaining the same power constraint and communication metrics. For instance, only 4 antennas are utilized in the “MA MRT” scheme to achieve the performance of the “FPA SOCP” scheme that deploys 9 antennas, which simplifies the transmitter design.
|
First, Fig. 2 shows the total transmit powers versus the numbers of channel paths with different numbers of transmitter-user pairs K, where the parameters are set to A = 4λ and N = 4. It is observed that the powers of all schemes decrease with L and the proposed algorithm outperforms the FPA scheme for any K due to the interference mitigation gain provided by MA positioning optimization. Besides, the decreasing transmit power of the FPA system is not caused by an increasing average channel gain (normalized by L and therefore a constant) but by the reduced interference. As the number of channel paths for each transmitter-user pair increases, the spatial diversity of MA is enhanced by leveraging the prominent channel variation, which decreases the correlation among channel vectors. However, as L increases beyond 5, the descent rate of the total transmit power becomes small because the channel correlation is constrained by the numbers of elevation and azimuth angles at the transmitters. Specifically, according to the channel model in (1), if the total numbers of angles are limited, the FRMs of multiple channels are likely to have similar row vectors. Thus, the local movement of MAs cannot further bring a significant reduction in channel correlation. The result demonstrates that, due to the strong ability of MA to enhance desired signals and suppress interference, the total transmit power of the proposed algorithm with 5 transmitter-user pairs is even lower than that of the FPA system with 3 pairs, which indicates that the MA-aided interference network can accommodate more cells without incurring any increase in total transmit power.
|
A
|
For audio modality, the speech signals are extracted from the corresponding videos with a sampling rate of 16 kHz. The log mel-spectrograms are then obtained using the preprocessing code provided by the Vggish repository (https://github.com/harritaylor/torchvggish). To ensure that the audio modality is properly synchronized with the sub-sequences of other modalities, we have used a hop length of 1/fps of the raw videos to extract the spectrograms.
|
For visual modality, random flipping and random crop with a size of 40 are used for data augmentation in training, while only center crop is used for validation. For audio and visual features, the input data is normalized to have a mean and standard deviation of 0.5. For text modality, the BERT features are normalized to have a mean of 0 and a standard deviation of 1. The Adam optimizer is used with a weight decay of 0.001 and the batch size is set to 12. The models are trained separately for valence and arousal. The maximum number of epochs is set to 100 and early stopping is employed to avoid over-fitting. The initial learning rate and minimum learning rate are set to 1e-5 and 1e-8, respectively. In our training strategy, we have deployed a warm-up scheme using a ReduceLROnPlateau scheduler with a patience of 5 and a factor of 0.1 based on the CCC score of the validation partition. It has been shown that gradual training of the backbones of individual modalities along with the fusion model, by gradually fine-tuning the layers of the backbones, helps to improve the performance of the system [42]. Therefore, we have deployed a similar strategy in our training framework, where three groups of layers for the visual (Resnet-50) and audio (VGG) backbones are progressively selected for fine-tuning. Initially, at epoch 0, the first group is unfrozen and the learning rate is linearly warmed up to 1e-5 within an epoch.
Then repetitive warm-up is employed until epoch 5, after which ReduceLROnPlateau is used to update the learning rate. The learning rate is gradually dropped by a factor of 0.1 until the validation CCC does not improve over 5 consecutive epochs, after which the second group is unfrozen and the learning rate is reset to 1e-5, followed by the warm-up scheme with ReduceLROnPlateau. The procedure is repeated until all the layers of the audio and visual backbones are fine-tuned. Also, note that the best model state dictionary over prior epochs is loaded at the end of each epoch to mitigate over-fitting. To further control over-fitting, we have employed cross-validation with 6 folds, where the fold 0 partition is the same as the original partition provided by the organizers [60]. The results obtained from the 6-fold cross-validation are shown in Table 1. In all these experiments, we have used 3 iterations in the fusion model (i.e., l = 3).
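The learning-rate policy described above can be simulated in isolation; this plain-Python class is a simplified stand-in for the warm-up plus ReduceLROnPlateau behavior, not the authors' training code:

```python
class WarmupPlateau:
    """Linear warm-up to `base_lr` over `warmup_steps`, then multiply the
    learning rate by `factor` when the monitored score stalls for more than
    `patience` epochs (a simplified reduce-on-plateau rule)."""
    def __init__(self, base_lr=1e-5, min_lr=1e-8, warmup_steps=10,
                 patience=5, factor=0.1):
        self.base_lr, self.min_lr = base_lr, min_lr
        self.warmup_steps, self.patience, self.factor = warmup_steps, patience, factor
        self.step_count, self.best, self.bad_epochs = 0, float("-inf"), 0
        self.lr = 0.0

    def warmup_step(self):
        self.step_count = min(self.step_count + 1, self.warmup_steps)
        self.lr = self.base_lr * self.step_count / self.warmup_steps

    def epoch_end(self, score):  # score: e.g., validation CCC (higher is better)
        if score > self.best:
            self.best, self.bad_epochs = score, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.bad_epochs = 0

sched = WarmupPlateau()
for _ in range(10):
    sched.warmup_step()
print(sched.lr)      # warmed up to 1e-05
for _ in range(7):   # one improvement, then six stalled epochs
    sched.epoch_end(0.5)
print(sched.lr)      # reduced by factor 0.1
```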
|
For the text modality, the extracted speech signals from the audio preprocessing are fed to the pretrained speech recognition model of the Vosk toolkit (https://alphacephei.com/vosk/models/vosk-model-en-us-0.22.zip) to obtain the recognized words along with word-level timestamps. Next, a pretrained punctuation restoration and capitalization model (https://pypi.org/project/deepmultilingualpunctuation/) is used to restore the punctuation of the recognized words, which carries semantic information pertinent to emotional states. Then, word-level BERT features are extracted using a pre-trained BERT model (https://pypi.org/project/pytorch-pretrained-bert/). The word-level features are computed by summing the outputs of the last four layers of the BERT model [61]. A recognized word usually spans a time window of multiple frames. In order to synchronize the word-level BERT features of the text modality with the audio and visual modalities, the word-level text embedding is populated as per the timestamp of each word by reassigning the same word-level feature to all the frames within the time span of the corresponding word.
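The synchronization step, reassigning each word-level embedding to every frame inside the word's timestamp span, can be sketched as follows; the feature dimension, fps, and timestamps are placeholder values, not the paper's settings:

```python
import numpy as np

def words_to_frames(word_feats, spans, n_frames, fps=25.0, dim=768):
    """word_feats: (W, dim) word-level embeddings; spans: list of (start_s, end_s).
    Returns (n_frames, dim) frame-level features, copying each word's vector
    to every frame within its time span (zeros where no word is spoken)."""
    frames = np.zeros((n_frames, dim))
    for feat, (start, end) in zip(word_feats, spans):
        i0, i1 = int(start * fps), min(int(end * fps), n_frames)
        frames[i0:i1] = feat
    return frames

word_feats = np.arange(2 * 4, dtype=float).reshape(2, 4)  # two 4-d "BERT" vectors
frames = words_to_frames(word_feats, [(0.0, 0.2), (0.2, 0.4)],
                         n_frames=10, fps=25.0, dim=4)
print(frames[0], frames[7])  # frames 0-4 carry word 1, frames 5-9 carry word 2
```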
|
Text modality is another predominantly explored modality for emotion detection, which carries semantic emotion-relevant information in the text data [50]. Effectively leveraging the textual data can boost the performance of multimodal fusion as they can offer significant emotion-relevant information and complement audio and visual modalities. Based on transformers, BERT features are predominantly explored text encoders for emotion recognition in the literature [51]. Therefore, we also used BERT as text encoder, followed by TCNs to encode the temporal information across the word embeddings.
|
For the audio modality, the speech signals are extracted from the corresponding videos with a sampling rate of 16 kHz. The log mel-spectrograms are then obtained using the preprocessing code provided by the VGGish repository (https://github.com/harritaylor/torchvggish). To ensure that the audio modality is properly synchronized with the sub-sequences of the other modalities, we have used a hop length of 1/fps of the raw videos to extract the spectrograms.
|
B
|
Consider $\bar{c}=0$. The first claim (31) of this theorem then states that if the value of the linearized game with error is not positive (and thus $x\in\mathcal{R}_{\delta^{*}}$), then the value of the true game will also be not positive (and thus $x\in\mathcal{R}$). For the second claim (32), consider any $\bar{c}=\epsilon$ for $\epsilon>0$; then (32) implies that if the value of the linearized system with error is greater than $\epsilon$ (and thus $x\notin\mathcal{R}_{\delta^{*}}^{-}$), then the value of the true game will be greater than $\epsilon$ (and thus $x\notin\mathcal{R}^{-}$). As we will show in the subsequent corollaries, together these claims lead to conclusions on the conservativeness of the backward reachable sets and the controllers generated from the linear game with antagonistic error. The proof of Thm. 3 follows.
|
The Reach and Avoid proofs are similar; hence, only the Reach case is shown for brevity. In Thm. 3, let $\bar{c}=0$; then by definition (12), $\bar{\mathcal{S}}(\mathcal{T},t)$ satisfies the hypothesis (30). It then follows from the game value property (9),
|
Therefore, given a sufficient $\hat{\mathcal{S}}_{\bar{c}}$, we obtain a conservative solution with respect to the true value, implying the following more practical result.
|
Both the Reach and Avoid proofs follow from the construction of a specific strategy for the antagonistic error; we show only the Avoid case. Namely, we prove the following contrapositive statement: for any $c\leq\bar{c}$, we have
|
$\delta^{*}\triangleq\max_{\hat{\mathcal{S}}_{\bar{c}}\times\Sigma(t)}\|[f-\ell](x,u,d,\tau)\|$. Then for any $x\in\mathcal{X}$ and any $c\leq\bar{c}$,
|
C
|
Our problem considers two distinct sign patterns, indicated by X and Y, for different marginals while maintaining a convex objective.
|
First, we demonstrate that the optimizer has a closed-form solution under the feasibility assumption.
|
The convergence of the algorithm can be understood through the strong duality of the problem. We first substitute the optimizer (11) into the Lagrangian (10), obtaining
|
Throughout we follow the convention that $\log(0/0)=0$. The objective function is strictly convex, and once the feasible set $\mathscr{C}$ is non-empty, an optimizer $\mathbf{P}^{*}$ always exists. Additionally, a closed-form solution can be derived from the first-order optimality condition by employing the Lagrangian, i.e.,
|
for every marginal vector $\mathbf{p}^{(l)}\in\mathbb{R}^{n}_{>0}$. Similar to the standard Sinkhorn iteration in Algorithm 1, a closed-form solution of the optimizer can be derived, obtained by a gradient ascent method with, similarly, a linear convergence rate [23].
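Algorithm 1 is not reproduced here, but for reference, the standard Sinkhorn iteration it refers to can be sketched as a minimal NumPy routine for two positive marginals r and c with entropic regularization eps (names and defaults are ours):

```python
import numpy as np

def sinkhorn(C, r, c, eps=0.1, n_iter=500):
    # Gibbs kernel from the cost matrix C and regularization strength eps
    K = np.exp(-C / eps)
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)        # scale columns toward the marginal c
        u = r / (K @ v)          # scale rows toward the marginal r
    return u[:, None] * K * v[None, :]   # transport plan P = diag(u) K diag(v)

r = np.array([0.5, 0.5])
c = np.array([0.3, 0.7])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P = sinkhorn(C, r, c)
```

Each alternating scaling step is the closed-form maximizer of the dual objective in one block of variables, which is what gives the method its linear convergence rate.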
|
A
|
Our architectural investigation explores diverse state space models, including S4 [15], HiPPO [14], Hyena [48], and Mamba [13], alongside attention models such as DeiT [57], ViT [7], and Swin [33], as well as spectral models like GFNet [50] and AFNO [16] for sequence modeling. Additionally, we explore various channel modeling alternatives, including MLP and Monarch Mixer, and introduce a novel technique called EinFFT. Through extensive experimentation, we have identified the most efficient and streamlined state space architecture to date, named SiMBA. This novel architecture incorporates Mamba for sequence modeling and introduces EinFFT as a new channel modeling technique. SiMBA effectively addresses the instability issues observed in Mamba when scaling to large networks. The architectural alternatives explored for large-scale sequence modeling are depicted in Figure-1. Table-1 provides an overview of large vision models used for image recognition tasks, categorizing them based on their sequence-mixing and channel-mixing techniques. It highlights a diverse range of models, including those based on convolutions, transformers, MLP-mixers, spectral mixers, and state space methods. Additionally, it introduces hybrid models combining convolution with transformers or spectral approaches. Lastly, it presents SiMBA, a novel model utilizing Mamba for sequence mixing and EinFFT for channel mixing.
|
The current instantiation of Mamba has stability issues, i.e., the training loss does not converge when scaling to large-sized networks on the ImageNet dataset. It is not clear why Mamba has this instability when scaling to large networks. This leads to the problem of vanishing/exploding gradients commonly observed in Mamba in general. Existing literature, such as Oppenheim and Verghese's work [43], establishes that linear state space models are stable when all eigenvalues of the matrix A are negative real numbers. This motivates the need for a stable Mamba architecture, as presented in this paper; specifically, we use Fourier transforms followed by a learnable layer with non-linearity. SiMBA also introduces residual connections with dropouts, which help in solving the instability issues in Mamba, as illustrated in Figure-1. This strategic application of learnable Fourier transforms aims to manipulate the eigenvalues, ensuring they are negative real numbers. Thus, the proposed channel modeling technique, named EinFFT, is a distinctive contribution to the field, as SSMs in the literature have not addressed channel modeling explicitly.
|
EinFFT: A new technique for channel modeling known as EinFFT is proposed, which solves the stability issue in Mamba. It uses Fourier transforms with non-linearity to model eigenvalues as negative real numbers, which solves the instability [43]. We validate this technique on two data modalities: time series and the ImageNet dataset.
|
In this study, we introduce EinFFT, a novel approach for frequency-domain channel mixing utilizing Einstein matrix multiplication. EinFFT is specifically designed for complex number representations of frequency components, enabling the effective capture of key patterns in image patch data with a global view and energy compaction. It must be noted that EinFFT is also applicable to other sequence data modalities like time series, speech, or even text data. We have validated EinFFT-based SiMBA on image and time series benchmark datasets.
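As an illustration of the idea, not the paper's exact EinFFT layer, frequency-domain channel mixing with Einstein matrix multiplication and a complex non-linearity can be sketched as below. The weight shapes, the split-ReLU non-linearity, and all names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def crelu(z):
    # Simple complex non-linearity: ReLU applied to real and imaginary parts.
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)

def einfft_channel_mix(x, W1, W2):
    # x: (N, C) real patch features; W1, W2: (F, F) complex mixing weights,
    # with F = C // 2 + 1 frequency bins from the real FFT over the channel axis.
    X = np.fft.rfft(x, axis=-1)                     # complex frequency components
    H = crelu(np.einsum('nf,fg->ng', X, W1))        # Einstein-sum mixing + non-linearity
    Y = np.einsum('ng,gh->nh', H, W2)
    return np.fft.irfft(Y, n=x.shape[-1], axis=-1)  # back to the real channel domain

x = rng.standard_normal((4, 8))
F = 8 // 2 + 1
W1 = rng.standard_normal((F, F)) + 1j * rng.standard_normal((F, F))
W2 = rng.standard_normal((F, F)) + 1j * rng.standard_normal((F, F))
y = einfft_channel_mix(x, W1, W2)
```

In a trained layer, W1 and W2 would be learnable parameters; the real FFT halves the number of frequency bins that must be mixed, which is where the energy compaction mentioned above comes from.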
|
This paper has proposed a new channel modeling technique, EinFFT, which solves the key stability problem in Mamba. We have also proposed SiMBA, a new architecture that leverages EinFFT for channel modeling and Mamba for sequence modeling. SiMBA allows for the exploration of various alternatives for sequence modeling like S4, long conv, Hyena, H3, RWKV, and even newer state space models. Importantly, SiMBA allows the exploration of alternatives for channel modeling like M2 and EinFFT, as well as other possible spectral techniques. SiMBA also bridges the performance gap that most state space models have with state-of-the-art transformers on both vision and time series datasets. We plan to explore a few alternatives within the SiMBA framework, such as long conv for sequence modeling with M2 or EinFFT for channel modeling.
|
B
|
This study contributes towards the enhancement of roadway work zone safety by offering a comprehensive set of reaction time metrics and benchmarks that are specifically designed for the unique environment of roadway work zones. These benchmarks are particularly relevant to the design of multimodal warning delivery systems, serving as a reference for developing real-time safety systems that leverage AR technology. This furthers our understanding of how to best utilize AR and multimodal warnings to enhance worker safety by reducing the reaction times of workers. Furthermore, we extend existing knowledge by exploring the use of VR as a simulation tool for AR-based warning systems, specifically in the context of safety research around reaction times. By drawing comparisons between the reaction time in VR-simulated AR and real-world AR scenarios, we enhance our understanding of VR simulation’s effectiveness in reaction time measurement. Additionally, this study introduces the use of vision-based pose tracking to assess workers’ reaction times, enhancing the overall understanding of occupational safety within roadway work zones. By tracking body movements in real-time, it’s possible to evaluate how quickly workers respond to safety alerts.
|
Meanwhile, understanding workers’ reaction time to safety warnings plays a vital role in the development of effective alert technologies. This significance is particularly accentuated in the context of roadway work zones, where the complex environment and the presence of fast-moving vehicles require a timely and rapid response from workers in case of intrusions [53]. To this end, several studies have been conducted to investigate workers’ reaction time in various systems and working environments related to roadway work zones. For example, Thapa et al. [54] examined the optimal configuration of a work zone intrusion alert technology and explored the relationship between sensor placement and alerting modules, considering workers’ naturalistic reactions. In another research work, Nnaji et al. [53] provided guidelines for the adoption of different commercially available work zone technologies for roadway work zones, taking into account the workers’ reaction time and response rate as essential metrics in their framework. In another study, Awolusi et al. [20] quantified the reaction time of roadway workers to two commercially available intrusion alert technologies specifically designed for roadway work zones. Finally, in a recent study, Yang et al. [55] conducted three experiments to assess the viability of using vibrotactile signals as warnings for road workers. The experiments aimed to assess the perception and performance of the generated signals at different body locations and to examine the usability of various warning strategies. Our review suggests that the existing literature does not provide sufficient evidence or insight into reaction times specifically related to AR-based warnings in this particular field.
|
|
This non-intrusive system would monitor a worker’s normal operational movements, and upon issuing a safety alert, compare the latency period between the alert and the detected change in the worker’s movement pattern. Over time, these data can help determine average reaction times, identify individuals or situations where reaction times are slower than expected, and develop interventions to improve response rates. This could also assist in creating personalized training programs for workers with disabilities. This approach not only enhances the safety and well-being of workers, but also contributes to the overall efficiency and safety of roadway work zones.
|
Non-intrusive safety measures are gaining popularity across various domains, particularly in the context of safety assessment [92]. These platforms have opened opportunities for a wide range of applications, including evaluating worker safety [93, 94]. However, to the best of our knowledge, no prior study has specifically focused on assessing the reaction times of highway workers using vision-based mechanisms. The proposed vision-based approach offers valuable information on the reaction abilities of high-risk workers by enabling the continuous assessment and evaluation of their cognitive and physiological responses through computer vision. It captures and analyzes relevant kinematic data in real-time, providing an understanding of how workers react and adapt to various situations. Importantly, this non-intrusive method minimizes operational disruptions and facilitates the continuous collection and assessment of data, allowing for more natural and realistic evaluations. Therefore, the proposed approach provides the capacity to identify potential risks, improve safety protocols, and improve worker performance. It could also be used to create customized training programs and precise interventions for workers who might demonstrate compromised reaction capabilities in high-risk environments and scenarios.
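A minimal sketch of such a latency computation from a pose-derived movement signal is shown below; thresholding against a pre-alert baseline, along with the function name and threshold choice, are illustrative assumptions rather than the study's actual pipeline:

```python
import numpy as np

def reaction_time(speed, t, alert_time, k=3.0):
    # speed: per-frame movement magnitude derived from pose keypoints; t: timestamps (s)
    base = speed[t < alert_time]                  # pre-alert baseline movement
    thresh = base.mean() + k * base.std()         # deviation threshold from the baseline
    idx = np.where((t >= alert_time) & (speed > thresh))[0]
    return t[idx[0]] - alert_time if idx.size else None

t = np.arange(0.0, 5.0, 0.1)
speed = np.where(t < 3.0, 1.0, 4.0) + 0.01 * np.sin(t)  # toy trace: movement change at t = 3 s
rt = reaction_time(speed, t, alert_time=2.5)
```

Aggregating such per-alert latencies over time yields the average reaction times and slow-response cases described above.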
|
C
|
In the future, we hope that RCIP can be combined with active preference learning [52, 53, 54] to better incorporate the human’s preferences in determining the appropriate level of robot autonomy (e.g. choosing from the valid set of RCIP parameters). We also plan to study RCIP’s ability to capture higher levels of interactivity in a system, such as when the robot must operate around more than one human, or when some humans are non-cooperative.
|
Statement of contributions. In this work, we introduce RCIP, a framework for measuring and calibrating risk in situations that involve interactions with humans with potentially ambiguous action choices. By reasoning about the human’s desired task outcome in the space of intents, we efficiently plan safe actions in the face of diverse, multi-modal human behavior, and ask for help when necessary. We make the following contributions: (1) We demonstrate how to use SRC to control the planning error rate across a set of model hyper-parameters, allowing flexible but provably safe levels of autonomy. (2) We prove theoretical guarantees for multi-dimensional risk control for both single-step and multi-step planning problems: with a set of $K$ user-specified risk budgets $(\alpha_{1},\dots,\alpha_{K})$ for different measures of risk (e.g., probability of failure and probability that the robot asks for help), the robot performs the task correctly (with high probability) by asking for help if any of the $K$ risk budgets would be violated. (3) We evaluate RCIP in both simulation and hardware with a suite of human-robot interactive planning tasks with various styles of situational ambiguity (spatial, contextual, semantic). Experiments across multiple platforms and human uncertainties showcase the ability of RCIP to provide statistically guaranteed task success rates while providing more flexible autonomy levels than baseline approaches. RCIP reduces the amount of human help by 5-30% versus baseline approaches.
|
We propose Risk-Calibrated Interactive Planning (RCIP), a framework that applies statistical multi-hypothesis risk control to address the problem of risk calibration for interactive robot tasks. We formalize RCIP as providing a statistical guarantee on an arbitrary number of user-specified risks, such as prediction failures and the amount of human help, subject to a bound on the rate at which the robot fails to predict the optimal actions. By optimizing preferences over a small number of model parameters, RCIP achieves higher flexibility in aligning to user preferences than fixed-parameter methods. Experiments across a variety of simulated and hardware setups demonstrate that RCIP does not exceed user-specified risk levels. Moreover, RCIP reduces user help by 8-87% when compared to baseline approaches that lack formal assurances.
|
Our approach utilizes deep-learned human intent prediction models (e.g., [3, 4]) for understanding interactivity, and rigorously quantifies the uncertainty of these models in order to decide when to ask for help. As shown in Fig. 1 (middle), we produce a limited set of human intents based on the prediction model’s confidence scores. For each predicted intent, we plan a sequence of actions that satisfy an environment objective, such as placing the item in the correct bin. Depending on the robot’s confidence level and the human’s preferred level of autonomy, the robot can either take a risk or ask for help. To allow the human to specify different levels of robot autonomy (more or less confident predictions), we assume that the predictor has a small number of tunable model parameters (such as the temperature used in softmax scoring). We use a small calibration dataset of human-robot interactions to choose a set of valid parameters that provide a level of risk and autonomy set in advance by the user. By leveraging recent advances in distribution-free risk control [5], we show that the robot’s behavior can simultaneously limit several notions of risk. We formalize this challenge via two objectives: (i) statistical risk calibration (SRC): the robot should seek sufficient help from the human when necessary to ensure a statistically guaranteed level of risk specified by the user, and (ii) flexible autonomy: the robot should ask for a minimal amount of help, as specified by the user, by narrowing down situational ambiguities through planning. We refer to these simultaneous objectives, with help from the human when necessary, as Risk-Calibrated Interactive Planning (RCIP).
|
Figure 2: RCIP formulates interactive planning as a multi-hypothesis risk control problem. Using a small set of calibration scenarios, RCIP computes step-wise prediction losses to form an aggregate empirical risk estimate. Given a risk limit, for each pair $(\lambda,\theta)$ of prediction thresholds and tunable model parameters, RCIP evaluates the hypothesis that the test-set risk is above the limit. Thus, for all hypotheses that are rejected, the test-set risk satisfies the threshold (with high probability).
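The hypothesis-testing step can be sketched in the distribution-free risk-control style of [5]. This is a simplified illustration; the Hoeffding p-value and the Bonferroni correction here are our choices, not necessarily the paper's exact procedure:

```python
import numpy as np

def valid_configs(losses, alpha, delta=0.05):
    # losses: (n_configs, n_cal) binary losses over calibration scenarios,
    # one row per candidate (lambda, theta) configuration
    n_cfg, n = losses.shape
    risks = losses.mean(axis=1)                        # empirical risk per config
    # Hoeffding p-value for H0: the true risk of this config exceeds alpha
    pvals = np.exp(-2.0 * n * np.clip(alpha - risks, 0.0, None) ** 2)
    # Bonferroni correction over the grid (family-wise error rate <= delta)
    return np.where(pvals < delta / n_cfg)[0]
```

Every configuration returned by this routine satisfies the user-specified risk budget alpha simultaneously with probability at least 1 - delta, which is what allows the user to then pick among them by autonomy preference.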
|
B
|
Complexity analysis: The complexity of the complete algorithm for the active RIS power minimization mainly lies in solving the SDP problem (25) in each iteration. The dimensions of the input variables and the constraints of the SDP problem affect the computational complexity. Since the considered SDP problem has a $(Q+1)\times(Q+1)$ PSD matrix variable and $(K+Q+1)$ PSD constraints, it usually takes $\mathcal{O}(\sqrt{Q+1})$ iterations to decrease the dual gap to the desired accuracy, with a worst-case complexity of $\mathcal{O}((K+Q+1)^{4})$ per iteration of the interior point method [30]. Therefore, the complexity of solving the SDP problem once is $\mathcal{O}(\sqrt{Q+1}(K+Q+1)^{4})$. Considering that the number of iterations needed to recover the rank-one solution is $I_{\mathrm{p}}$, the complexity of Algorithm 4 is $\mathcal{O}((I_{\mathrm{p}}+1)\sqrt{Q+1}(K+Q+1)^{4})$, where the SDP problem has to be solved at least once. Moreover, to obtain the sparse reflect beamforming vector, the reweighted factor $\bm{\beta}$ is updated over $I_{\mathrm{re}}$ iterations. The computational complexity of the complete algorithm in Algorithm 5 is thus $\mathcal{O}(I_{\mathrm{re}}(I_{\mathrm{p}}+1)\sqrt{Q+1}(K+Q+1)^{4})$.
|
In this study, we have investigated the sum-rate maximization and power minimization in active RIS-aided interference channels. For the sake of energy saving, we have proposed the power-aware sparse reflect beamforming designs on active RIS, which allow it to flexibly use its power budget by closing parts of the inefficient REs that suffer from the poor channel conditions on signal propagations. An important aspect of our research is to assess the energy-saving potential of these power-aware designs in comparison to traditional RIS designs.
|
This paper has investigated active RIS-aided interference channels where $K$ user pairs transmit at the same time over a common frequency band with the assistance of an active RIS. We have studied how the maximum amplitude constraint on each RE affects the capability of the RIS to mitigate interference by solving the interference power minimization problem. Furthermore, we have considered the power-aware design for the active RIS, whose power consumption mainly depends on the number of activated REs. Based on this model, we have maximized the sum rate of the interference channel system subject to the maximum amplitude and power budget constraints, and have also minimized the active RIS power consumption subject to the maximum amplitude and minimum rate requirements. The sparse reflect beamforming vector solutions to these problems have been obtained with the iterative $\ell_{1}$-norm reweighted algorithm. Numerical results have shown the superiority of the proposed power-aware designs for the active RIS.
|
As illustrated in the above section, the active RIS helps to suppress interference with a moderate number of REs in the presence of strong cross channels. In practice, the active RIS amplifies incident signals at the cost of a biasing power source for each RE. For a power-aware design, the power consumed by the active RIS should be as little as possible while still achieving specific goals. To this end, we first reinvestigate the power consumption model of the active RIS, and then propose power-aware designs for it.
|
First, we propose a modified power consumption model for the active RIS, where the active RIS is allowed to close parts of inefficient REs for the sake of energy saving, and only those activated REs consume additional biasing and circuit operation power. This model later leads to two kinds of power-aware designs on active RIS.
|
A
|