Dataset columns: context (string, 100–5.69k characters), A (string, 100–3.76k), B (string, 100–3.61k), C (string, 100–5.61k), D (string, 100–3.87k), label (string, 4 classes).
$\textrm{C}^{2}$-WORD outperforms
$\textrm{A}^{2}\textrm{RC}$ and WORD in the sense of WNG.
selection of $\textrm{A}^{2}\textrm{RC}$ is optimal in the sense
the existing $\textrm{A}^{2}\textrm{RC}$
$\textrm{A}^{2}\textrm{RC}$ (in the sense of WNG).
D
The two-layer CNN S2I performed worse even than the 1D variants, indicating that increasing the S2I depth is not beneficial.
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments with 178 samples each, resulting in a balanced dataset that consists of 11500 EEG signals.
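As a small illustration of this preprocessing (a sketch only; `segment_eeg` and the toy recording below are ours, not part of the dataset release), each long EEG recording can be cut into non-overlapping 178-sample windows:

```python
# Illustrative segmentation of a continuous EEG recording into 178-sample windows,
# mirroring the description of the UCI Epileptic Seizure Recognition variant above.
import numpy as np

def segment_eeg(signal: np.ndarray, window: int = 178) -> np.ndarray:
    """Split a 1-D EEG signal into non-overlapping windows of `window` samples."""
    n_segments = len(signal) // window
    return signal[: n_segments * window].reshape(n_segments, window)

recording = np.random.randn(178 * 23)   # toy recording: 23 windows' worth of samples
segments = segment_eeg(recording)
print(segments.shape)                    # (23, 178)
```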
The spectrogram S2I results are contrary to the expectation that the interpretable time-frequency representation would help in finding good features for classification.
The names of the classes are depicted on the right, along with the predictions for this example signal.
C
UAVs have several power levels and altitude levels. In extreme environments, a UAV cannot change its power dramatically, but can only move to an adjacent power level [12]. Similarly, altitude changes are restricted so that only adjacent altitude levels can be reached in each move. We denote the power set and the altitude set by $P=\{P_{1},\dots,P_{k},\dots,P_{np}\}$ and $h=\{h_{1},\dots,h_{k},\dots,h_{nh}\}$, respectively, where $np$ is the number of power levels and $nh$ is the number of altitude levels. We assume that the gaps between adjacent levels of power and of altitude are equal, and let $\Delta P$ and $\Delta h$ denote the spacing between adjacent power levels and altitude levels, respectively.
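As an illustration of these constrained level sets (a sketch with assumed values; `make_levels` and `feasible_moves` are our names, not the paper's), the adjacency restriction can be encoded as follows:

```python
# Discretized power/altitude levels with the adjacency constraint that each move may
# only keep the current level or step to a neighboring one (illustrative values).
import numpy as np

def make_levels(low: float, high: float, n: int) -> np.ndarray:
    """Equally spaced levels, so the gap between adjacent levels is constant."""
    return np.linspace(low, high, n)

def feasible_moves(levels: np.ndarray, idx: int) -> list:
    """Indices reachable in one move: the current level and its adjacent levels."""
    return [j for j in (idx - 1, idx, idx + 1) if 0 <= j < len(levels)]

P = make_levels(1.0, 5.0, 9)      # power levels in mW, so Delta_P = 0.5 mW
h = make_levels(50.0, 150.0, 5)   # altitude levels in m, so Delta_h = 25 m
print(feasible_moves(P, 0))       # from P_1 = 1 mW only 1 mW and 1.5 mW are reachable
```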
In post-disaster scenarios, a great many UAVs are required to support users [4]. Therefore, we introduce aggregative game theory into such scenarios and permit UAVs to learn within constrained strategy sets. Because an aggregative game can integrate the impact of all other UAVs on a given UAV, it reduces the complexity of receiving information and the data processing burden on UAVs. For instance, in a conventional game applied to a scenario with N UAVs, each UAV needs to analyze the N strategies that determine the noise and coverage sizes of every other individual UAV, whereas an aggregative game only needs to process the aggregated noise and coverage sizes of all other UAVs. This advantage becomes more pronounced when the number of UAVs is extremely large, since figuring out each other's strategies is unrealistic [8]. In terms of constrained strategy sets, due to environmental factors such as violent winds [11] and heavy rainstorms, the action set of UAVs is restricted: a UAV cannot switch abruptly between extremely high and low power or altitude levels, but can only move to adjacent levels [12]. For instance, the power can change from 1 mW to 1.5 mW in the first time slot and from 1.5 mW to 2 mW in the next one, but it cannot change directly from 1 mW to 2 mW. Therefore, the aggregative game with constrained strategy sets is an ideal model for post-disaster scenarios.
Fig. 12 presents a sketch of a UAV's utility as its power changes, with the UAVs' altitudes fixed. When the other UAVs' power profiles change, the interference increases and the curve moves down; high interference reduces the utility of the UAV. Fig. 12 also shows that the utility first decreases and then increases as power grows. Both small and large power provide high utility, because small power saves energy while large power increases the SNR. A UAV might therefore select the largest power to increase its utility. However, the more power one UAV uses, the more interference the other UAVs receive, and their utilities decrease. For the sake of enlarging the global utility, the largest power is not the optimal strategy for the whole UAV ad-hoc network; the best power lies at a value smaller than the largest power (the optimal value in the figure is only a sketch).
When UAVs need to communicate, the signal-to-noise ratio (SNR) mainly determines the quality of service. The UAVs' transmit power and inherent noise interfere with one another. Since there are hundreds of UAVs in the system, each UAV is unable to sense all the other UAVs' power explicitly; it can only sense and measure the aggregate interference and treat it as an integral influence. Although increasing power can improve the SNR, excessively large power causes more energy consumption and results in less running time. Therefore, proper power control for UAVs needs to be carefully designed.
To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR, and coverage size are the key factors that decide the performance of a UAV network [6]. Power control determines a UAV's energy consumption and signal-to-noise ratio (SNR), while altitude decides the number of users that can be supported [7] and also determines the minimum required SNR: the higher a UAV flies, the more users it can support and the higher the SNR it requires. Therefore, power control and altitude are two essential factors. There has been extensive research on building models that focus on various network factors. For example, the work in [8] established a system model with channel and time-slot selection, and the authors of [9] constructed a coverage model that considered each agent's coverage size on a network graph. However, such models usually consider only one specific characteristic of the network and ignore the system's multiplicity, which can bring great losses in practice, since UAVs may consume too much power to improve the SNR or to increase the coverage size. Even though UAV systems in 3D scenarios with the multiple factors of coverage and charging strategies have been studied in [7], that work overlooks power control, which means that UAVs might waste a lot of energy. To sum up, for UAV ad-hoc networks in post-disaster scenarios, power control and altitude, which determine energy consumption, SNR, and coverage size, ought to be considered to make the model credible [10].
C
This section discusses the advancements in semantic image segmentation using convolutional neural networks (CNNs), which have been applied to interpretation tasks on both natural and medical images (Garcia-Garcia et al., 2018; Litjens et al., 2017). Although artificial neural network-based image segmentation approaches have been explored in the past using shallow networks (Reddick et al., 1997; Kuntimad and Ranganath, 1999), as well as in works that relied on superpixel segmentation maps to generate pixelwise predictions (Couprie et al., 2013), in this work we focus on deep neural network-based image segmentation models that are end-to-end trainable. The improvements are mostly attributed to exploring new neural architectures (with varying depths, widths, and connectivity or topology) or to designing new types of components or layers.
Next, encoder-decoder segmentation networks (Noh et al., 2015) such as SegNet, were introduced (Badrinarayanan et al., 2015). The role of the decoder network is to map the low-resolution encoder feature to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples the lower resolution input feature maps. Specifically, the decoder uses pooling indices (Figure 5) computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. The architecture (Figure 5) consists of a sequence of non-linear processing layers (encoder) and a corresponding set of decoder layers followed by a pixel-wise classifier. Typically, each encoder consists of one or more convolutional layers with batch normalization and a ReLU non-linearity, followed by non-overlapping max-pooling and sub-sampling. The sparse encoding due to the pooling process is upsampled in the decoder using the max-pooling indices in the encoding sequence.
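The index-based upsampling described above can be sketched in a few lines (a PyTorch illustration of the mechanism, not the authors' implementation):

```python
# Max-pooling indices from the encoder are reused by the decoder to place activations
# back at their original spatial locations; trainable convolutions then densify the
# resulting sparse maps, as in SegNet.
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 64, 8, 8)                          # encoder feature map
pooled, indices = pool(x)                             # 4x4 map plus argmax locations
upsampled = unpool(pooled, indices)                   # sparse 8x8 map, zeros elsewhere
dense = nn.Conv2d(64, 64, 3, padding=1)(upsampled)    # densify with trainable filters
print(pooled.shape, upsampled.shape)                  # (1, 64, 4, 4) and (1, 64, 8, 8)
```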
In order to preserve the contextual spatial information within an image as the filtered input data progresses deeper into the network, Long et al. (2015) proposed to fuse the output with shallower layers’ output. The fusion step is visualized in Figure 4.
The quantitative evaluation of segmentation models can be performed using pixel-wise and overlap based measures. For binary segmentation, pixel-wise measures involve the construction of a confusion matrix to calculate the number of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) pixels, and then calculate various metrics such as precision, recall (also known as sensitivity), specificity, and overall pixel-wise accuracy. They are defined as follows:
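Since the formulas themselves do not appear in this excerpt, the standard definitions are reproduced here for completeness (a conventional formulation, not copied from the source):

```latex
\begin{aligned}
\text{precision} &= \frac{TP}{TP+FP}, &
\text{recall (sensitivity)} &= \frac{TP}{TP+FN},\\
\text{specificity} &= \frac{TN}{TN+FP}, &
\text{accuracy} &= \frac{TP+TN}{TP+TN+FP+FN}.
\end{aligned}
```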
As one of the first high impact CNN-based segmentation models, Long et al. (2015) proposed fully convolutional networks for pixel-wise labeling. They proposed up-sampling (deconvolving) the output activation maps from which the pixel-wise output can be calculated. The overall architecture of the network is visualized in Figure 3.
D
The UAVs' trajectory on the $xy$-plane is assumed to follow the Smooth-Turn mobility model [34], which can capture the mobility of UAVs in scenarios like patrolling. In this model, a UAV circles around a certain point on the horizontal ($xy$) plane for an exponentially distributed duration, after which it selects a new center point with a turning radius whose reciprocal obeys the normal distribution $\mathcal{N}(0,\sigma^{2}_{r})$. According to [34], $\sigma^{2}_{r}$ plays an important role in the degree of randomness. In the vertical direction, the UAVs are in uniform linear motion with different velocities $v_{t(r),z}$, where $v_{t(r),z}$ obeys the uniform distribution $v_{t(r),z}\sim\mathcal{U}(v_{t(r),z,\text{min}},v_{t(r),z,\text{max}})$. Moreover, to maintain the communication link with the r-UAV, each t-UAV keeps its position within a limited region at all times such that the distance between the t-UAV and the r-UAV is less than $D_{\text{r,max}}$. The distance between UAVs is also kept no smaller than $D_{\text{r,min}}$ to ensure flight safety. The relationship between the position and attitude (equations (8)-(10) in [35]) is used to determine the UAVs' attitude.
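A small sketch of how such trajectories could be sampled (parameter values and function names are assumptions for illustration, not taken from [34]):

```python
# Smooth-Turn-style sampling: keep turning about a center for an exponentially
# distributed duration; at each change, draw the reciprocal of the turning radius
# from N(0, sigma_r^2), so near-zero draws correspond to almost straight flight.
import numpy as np

rng = np.random.default_rng(0)

def sample_turn_segments(n_segments: int, sigma_r: float, mean_duration: float):
    """Return (duration, turning_radius) pairs for consecutive smooth turns."""
    durations = rng.exponential(mean_duration, size=n_segments)
    inv_radius = rng.normal(0.0, sigma_r, size=n_segments)   # 1/r ~ N(0, sigma_r^2)
    radii = np.full(n_segments, np.inf)                      # inf ~ straight-line flight
    nonzero = np.abs(inv_radius) > 1e-9
    radii[nonzero] = 1.0 / inv_radius[nonzero]
    return list(zip(durations, radii))

for duration, radius in sample_turn_segments(3, sigma_r=0.01, mean_duration=5.0):
    print(f"turn for {duration:.1f} s with radius {radius:.1f} m")
```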
A conceptual frame structure is designed that contains two types of time slots: the exchanging slot (e-slot) and the tracking slot (t-slot). Let us first focus on the e-slot. It is assumed that UAVs exchange MSI every $T$ t-slots, i.e., in an e-slot, to save resources for payload transmission. In the MSI exchanging period of e-slot $t$, the r-UAV exchanges its historical MSI with each t-UAV, and each t-UAV exchanges its historical MSI only with the r-UAV over the low-rate control links that work in the lower-frequency band [36]. Then the t-UAVs and the r-UAV perform codeword selection. Employing the GP-based MSI prediction algorithm proposed in [31], each t-UAV predicts the MSI of the r-UAV, and the r-UAV predicts the MSI of all t-UAVs, over the next $T$ t-slots. In the tracking error bounding period, the UAVs estimate the TE of the AOAs and AODs based on the GP prediction error. Compared to the e-slot, the t-slot has no MSI exchanging, prediction, or error bounding, but it does have TE-aware codeword selection. Specifically, in the t-slot the t-UAVs and the r-UAV achieve adaptive beamwidth control against AOD/AOA prediction errors by employing TE-aware codeword selection. Compared to the motion-aware protocol in [31], the new TE-aware protocol can be applied to UAV mmWave networks with higher mobility, including random trajectories and high velocity. Since the new TE-aware protocol contains the error bounding and TE-aware codeword selection periods, it is able to deal with the beam tracking error caused by the high mobility of UAVs. We detail how to bound the TE and how to select a proper codeword with suitable beamwidth against the TE in the following subsections.
Moreover, the data block of MSI is set as $B_{\text{MSI}}=n_{\text{MSI}}\times T\times b_{\text{MSI}}$ bits, where $n_{\text{MSI}}=6$ is the dimension of the MSI at each slot, $T=50$ is the number of slots between adjacent MSI exchanges, and each dimension of the MSI at each slot is represented by $b_{\text{MSI}}=4$ bits. The transmission rate of the lower band is set as $C_{\text{LB}}=500$ kbps [38], the data block is set as $B_{\text{data}}=1$ Mbit, $C_{\text{ave}}$ is the average rate of the mmWave band, $D_{k,\max}$ is the maximum distance between the t-UAV and the r-UAV, and $c$ is the speed of light. As the computational complexity of the algorithms for the r-UAV is higher than that of the t-UAVs, the local processing time mainly depends on the time for the r-UAV to perform the beam tracking algorithms, which is estimated based on the number of multiplications and additions and on the CPU of the UAVs. The Intel i7-8550U CPU [39] with a base frequency of 1.8 GHz is considered in the simulation; it is adopted by the commonly used onboard computer "Manifold 2", which supports many types of UAVs such as the DJI Matrice 600 Pro and the Matrice 210 series [40].
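As a quick sanity check of these settings (a worked computation under the stated values; $t_{\text{exch}}$ is our notation, not the paper's):

```latex
B_{\text{MSI}} = n_{\text{MSI}}\,T\,b_{\text{MSI}} = 6 \times 50 \times 4 = 1200~\text{bits},
\qquad
t_{\text{exch}} = \frac{B_{\text{MSI}}}{C_{\text{LB}}} = \frac{1200~\text{bits}}{500~\text{kbps}} = 2.4~\text{ms}.
```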
Thanks to the integrated sensors, such as inertial measurement unit (IMU) and global position system (GPS), the UAV is able to derive its own MSI. However, the r-UAV also needs the MSI of all t-UAVs and each t-UAV needs the r-UAV’s MSI for beam tracking, which is challenging for the r-UAV/t-UAVs.
Specifically, the r-UAV/t-UAV’s historical MSI is first exchanged with the t-UAV/r-UAV over a lower-frequency band and then the t-UAV will predict the future MSI of the r-UAV based on the historical MSI by using the GP-based MSI prediction model.
C
The inner product of the subgradients and the error between the local optimizers' states and the global optimal solution inevitably appears in the recursive inequality for the conditional mean square error. As a result, the nonnegative supermartingale convergence theorem cannot be applied directly.
III. The co-existence of random graphs, subgradient measurement noises, and additive and multiplicative communication noises is considered. Compared with the case of only a single random factor, the coupling terms of the different random factors inevitably affect the mean square difference between the optimizers' states and any given vector. Moreover, multiplicative noises, which depend on the relative states between adjacent local optimizers, make the states, graphs, and noises coupled together. It becomes more complex to estimate the mean square upper bound of the local optimizers' states (Lemma 3.1). We first employ the property of conditional independence to deal with the coupling term of the different random factors. Then, we prove that the mean square upper bound of the coupling term between states, network graphs, and noises depends on the second-order moment of the difference between the optimizers' states and the given vector. Finally, we obtain an estimate of the mean square growth rate of the local optimizers' states in terms of the step sizes of the algorithm (Lemma 3.2).
We first estimate the mean square increasing rate of the states in Lemma III.2, and then substitute this rate into the recursive inequality (11) of the conditional mean square error between the state and the global optimal solution.
To this end, we estimate the upper bound of the mean square increasing rate of the local optimizers’ states at first (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error, and obtain the estimated convergence rate of mean square consensus (Lemma 3.3). Further, the estimations of these rates are substituted into the recursive inequality of the conditional mean square error between the states and the global optimal solution. Finally, by properly choosing the step sizes, we prove that the states of all local optimizers converge to the same global optimal solution almost surely by the non-negative supermartingale convergence theorem. The key lies in that the algorithm step sizes should be chosen carefully to eliminate the possible increasing effect caused by the linear growth of the subgradients and to balance the rates between achieving consensus and seeking the optimal solution.
D
$H_{1}$, $H_{2}$ and $H$ are defined as $H_{1}(s)=K_{v}K_{p}G(s)$, $H_{2}(s)=K_{v}K_{p}G(s)$, and $H(s)=K_{v}sG(s)+1+K_{v}K_{p}G(s)$.
One can easily obtain the transfer function from the reference trajectories to the actual position and velocity as
where $v_{s,k}$ is the sampled velocity along the path at time step $k$ and $T$ is the sampling time.
Given (3), one can obtain a discrete-time model with sampling time $T=2.5\,\mathrm{ms}$ as
Following (4), (5), (6) and (7), we obtain a linear time-varying system of the form $\mathbf{z}_{k+1}=\mathrm{A}_{k}\mathbf{z}_{k}+\mathrm{B}_{k}\mathbf{u}_{k}+\mathbf{d}_{k}$
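A minimal sketch of propagating such a discrete-time LTV model (the matrices below are placeholders, not the ones derived from (4)-(7)):

```python
# Propagate z_{k+1} = A_k z_k + B_k u_k + d_k with placeholder matrices; the sampling
# time T = 2.5 ms matches the value quoted above.
import numpy as np

T = 2.5e-3                                             # sampling time in seconds

def A_k(k): return np.array([[1.0, T], [0.0, 1.0]])    # placeholder dynamics
def B_k(k): return np.array([[0.0], [T]])              # placeholder input matrix
def d_k(k): return np.zeros(2)                         # placeholder affine/disturbance term

z = np.zeros(2)
for k in range(4):
    u = np.array([1.0])                                # constant input, for illustration only
    z = A_k(k) @ z + B_k(k) @ u + d_k(k)
print(z)
```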
C
This indicates that as the compression accuracy becomes smaller, its impact exhibits “marginal effects”.
In other words, when the compression errors are not the bottleneck for the convergence, sacrificing the communication costs for faster convergence will reduce the communication efficiency.
In decentralized optimization, efficient communication is critical for enhancing algorithm performance and system scalability. One major approach to reduce communication costs is considering communication compression, which is essential especially under limited communication bandwidth.
When $b=6$ or $k=20$, the trajectories of CPP are very close to those of the exact Push-Pull/$\mathcal{A}\mathcal{B}$ algorithm, which indicates that when the compression errors are small, they are no longer the bottleneck of convergence.
The existence of compression errors may result in inferior convergence performance compared to uncompressed or centralized algorithms. For example, the methods considered by [41, 42, 43, 44, 45, 46] can only guarantee to reach a neighborhood of the desired solutions when the compression errors exist.
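For concreteness, two compression operators of the kind referenced in this literature are sketched below (a generic illustration, not the exact operators of the cited works): top-$k$ sparsification and $b$-bit uniform quantization.

```python
# Generic communication-compression operators: keep only the k largest-magnitude
# entries (top-k), or quantize each entry to one of 2^b uniform levels.
import numpy as np

def top_k(x: np.ndarray, k: int) -> np.ndarray:
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]      # indices of the k largest-magnitude entries
    out[idx] = x[idx]
    return out

def quantize(x: np.ndarray, b: int) -> np.ndarray:
    lo, hi = x.min(), x.max()
    levels = 2 ** b - 1
    q = np.round((x - lo) / (hi - lo + 1e-12) * levels)
    return lo + q / levels * (hi - lo)

x = np.random.randn(50)
print(np.linalg.norm(x - top_k(x, 20)))    # compression error of top-20
print(np.linalg.norm(x - quantize(x, 6)))  # compression error of 6-bit quantization
```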
A
Moreover, a smaller batch size degrades overall performance, including downstream classification accuracy.
In our experiments, we will use the same pre-trained model parameters to initialise the models for different downstream tasks. During fine-tuning, we fine-tune the parameters of all the layers, including the self-attention and token embedding layers.
(b), (c) the fine-tuning procedure for note-level and sequence-level classification. Apart from the last few output layers, both pre-training and fine-tuning use the same architecture.
To train Transformers, all input sequences are required to have the same length. For both REMI and CP, we divide the token sequence of each entire piece into a number of shorter sequences with equal length 512, padding those at the end of a piece to 512 with an appropriate number of Pad tokens.
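A minimal sketch of this sequence preparation (assumed, not the authors' released code; `PAD_ID` is an illustrative token id):

```python
# Cut each piece's token sequence into chunks of length 512 and pad the final chunk
# with Pad tokens so that every training sequence has the same length.
from typing import List

SEQ_LEN = 512
PAD_ID = 0  # assumed id of the Pad token

def chunk_piece(tokens: List[int], seq_len: int = SEQ_LEN, pad_id: int = PAD_ID) -> List[List[int]]:
    chunks = [tokens[i:i + seq_len] for i in range(0, len(tokens), seq_len)]
    if chunks and len(chunks[-1]) < seq_len:
        chunks[-1] = chunks[-1] + [pad_id] * (seq_len - len(chunks[-1]))
    return chunks

piece = list(range(1, 1201))                   # a toy 1200-token piece
chunks = chunk_piece(piece)
print(len(chunks), [len(c) for c in chunks])   # 3 chunks, each of length 512
```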
For fine-tuning, we create training, validation and test splits for each of the three datasets of the downstream tasks with the 8:1:1 ratio at the piece level (i.e., all the 512-token sequences from the same piece are in the same split).
D
A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, “Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks,” in Proc. 23rd Int. Conf. Mach. Learning (ICML), Pittsburgh, USA, Jun. 2006, pp. 369–376.
H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, “Learning to optimize: Training deep neural networks for interference management,” IEEE Trans. Signal Process., vol. 66, no. 20, pp. 5438–5453, Oct. 2018.
M. Schuster and K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Trans. Signal Process., vol. 45, no. 11, pp. 2673–2681, Nov. 1997.
C
The computational running time was analysed for B2, B6, and the more complex InceptionV3 (IV3) model, both fully re-trained (F) and with transfer learning (TL), on the PCAM dataset. The results are shown in Table 2; note that the time corresponds to the average time observed for one epoch. This allows us to compare the effects of model architecture and hardware GPU acceleration. As expected, the running time increases with the complexity and depth of the model. The IV3-F model takes 4 to 10 times longer to train than the simple two-convolutional-layer B2 model, depending on the GPU card used, and the B6 CNN model takes 1.7 to 2 times longer than the B2 model. With the InceptionV3 model, using transfer learning obviously saves a lot of training time, as full model training takes ~3 times longer on all GPU models. In fact, even though the IV3-TL model (transfer learning) is much more complex, its running time is comparable to that of the B2 and B6 models. Regarding the different GPU cards tested here, more recent and powerful GPU cards decrease the computing time quite drastically, with an acceleration factor between 5 and 12 for the most recent architecture tested here (A100) on all the CNN models, compared to the oldest model tested here (K80). It is worth noting that the deepest model tested here can be fully trained in about one hour with a V100 or A100 GPU card.
Figure 4: Boxplots showing the AUC score for different CNN models for fully re-trained models (F) or with transfer learning (TL).
Precise staging by expert pathologists of breast cancer axillary nodes, a tissue commonly used for the detection of early signs of tumor spreading, is an essential task that determines the patient's treatment and their chances of recovery. However, it is a difficult task that has been shown to be prone to misclassification. Algorithms, and in particular deep learning-based convolutional neural networks, can help the experts in this task by analyzing fully digitized slides of microscopic stained tissue sections. In this study, I evaluated twelve different CNN architectures and different hardware acceleration devices for breast cancer classification on two different public datasets consisting of hundreds of thousands of images. Hardware acceleration devices can improve the training time by a factor of five to twelve, depending on the model used. On the other hand, increasing the convolutional depth increases the training time by a factor of four to six, depending on the acceleration device used. More complex models tend to perform better than very simple ones, especially when fully retrained on the digital pathology dataset, but the relationship between model complexity and performance is not straightforward. Transfer learning from ImageNet always performs worse than fully retraining the models. Fine-tuning the hyperparameters of the model improves the results, with the best model tested in this study showing very high performance, comparable to current state-of-the-art models.
Table 2: Run time in seconds for one epoch on different GPU architectures. NbCU: number of CUDA cores. Pp: processing power in GFlops. TL: transfer learning. F: full retraining.
C
Then, the optimal complex wavefront modulation for the neural étendue expander would be the inverse Fourier transform of the target scene, and, as such, we do not require any additional modulation on the SLM. The SLM therefore can be set to zero-phase modulation.
To assess whether the optimized neural étendue expander $\mathcal{E}$, shown in Fig. 1b, has learned the image statistics of the training set, we evaluate the virtual frequency modulation $\widetilde{\mathcal{E}}$, defined as the spectrum of the generated image with the neural étendue expander and the zero-phase SLM modulation, as
To further understand this property of a neural étendue expander, we consider the reconstruction loss $\mathcal{L}_{T}$ for a specific target image $T$.
If we generalize this single-image case to diverse natural images, the neural étendue expander is expected to preserve the common frequency statistics of natural images, while the SLM fills in the image-specific residual frequencies to generate a specific target image.
Therefore, obtaining the optimal neural étendue expander, which minimizes the reconstruction loss $\mathcal{L}_{T}$, results in a virtual frequency modulation $\widetilde{\mathcal{E}}$ that resembles the natural-image spectrum $\mathcal{F}(T)$ averaged over diverse natural images. Also, the retinal frequency filter $\mathcal{F}(f)$ leaves the higher spectral bands outside of the human retinal resolution unconstrained. This allows the neural étendue expander to push undesirable energy towards higher frequency bands, which then manifests as imperceptible high-frequency noise to human viewers.
C
Medical imaging methods such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are essential to clinical diagnosis and surgery planning. Hence, high-resolution medical images are desirable to provide the necessary visual information about the human body. In recent years, many DL-based methods have also been proposed for medical image SR.
et al., 2018) believed that low-resolution images in the real world constitute a specific distribution in high-dimensional space, and use a generative adversarial network to generate low-resolution images consistent with this distribution from high-resolution images. After that, Yuan et al. (Yuan
In recent years, more and more Transformer-based models have been proposed. For example, Chen et al. proposed the Image Processing Transformer (IPT (Chen et al., 2021)), which was pre-trained on large-scale datasets. In addition, contrastive learning is introduced for different image-processing tasks, so the pre-trained model can be efficiently employed on the desired task after fine-tuning. However, IPT (Chen et al., 2021) relies on large-scale datasets and has a large number of parameters (over 115.5M), which greatly limits its application scenarios. To solve this issue, Liang et al. proposed SwinIR (Liang et al., 2021) for image restoration based on the Swin Transformer (Liu et al., 2021b). Specifically, residual Swin Transformer blocks (RSTB) are proposed for feature extraction, and DIV2K+Flickr2K is used for training. To address the lack of direct interaction between different windows in SwinIR, Zamir et al. (Zamir et al., 2022) proposed Restormer to reconstruct high-quality images by embedding CNNs within the Transformer and performing local-global learning at multiple scales. Chen et al. proposed CAT (Chen et al., 2022d) to extend the attention region and aggregate features across different windows. Then, to activate more of the pixels that the Transformer focuses on, Chen et al. proposed HAT (Chen
et al., 2023c) proposed a Cross-receptive Focused Inference Network (CFIN) that can incorporate contextual modeling to achieve good performance with limited computational resources. Zhu et al. (Zhu et al., 2023) designed an Attention Retractable Frequency Fusion Transformer (ARFFT) to strengthen the representation ability and extend the receptive field to the whole image. Li et al. (Li et al., 2023d) proposed a concise and powerful Pyramid Clustering Transformer Network (PCTN) for lightweight SISR. Chen et al. (Chen
For instance, Chen et al. proposed a Multi-level Densely Connected Super-Resolution Network (mDCSRN (Chen et al., 2018)) with GAN-guided training to generate high-resolution MR images, which can train and infer quickly. In (Wang
D
SHAP visualisations such as that in Fig. 1(c) can be sparse, indicating that only a few spectro-temporal bins contribute to the classifier output. A comparison of the time waveform in Fig. 1(a) and the SHAP values in Fig. 1(c) shows that this particular classifier essentially ignores information contained in non-speech regions, focusing instead upon the speech interval between approximately 1 and 2 seconds and, furthermore, upon frequencies mostly below 1.5 kHz.
It shows the degree to which each spectro-temporal bin contributes to the classifier output. Darker red points indicate the spectro-temporal bins which lend stronger support for the positive class (here bona fide). In contrast, darker blue points indicate greater support for the negative class (here, spoofed speech).
In the remainder of this paper we describe our use of DeepSHAP to help explain the behaviour of spoofing detection systems. We show a number of illustrative examples for which the input utterances, all drawn from the ASVspoof 2019 LA database [13], are chosen specially to demonstrate the potential insights which can be gained. Given the difficulty in visualising true SHAP values, in the following we present average temporal or spectral results. Given our focus on spoofing detection, we present results for both bona fide and spoofed utterances and the temporal or spectral regions which favour either bona fide or spoofed classes. Results hence reflect where, either in time or frequency, the model has learned to focus attention and hence help to explain its behaviour in terms of how the model responds to a particular utterance.
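A sketch of the kind of analysis described above (illustrative only; `average_shap` and the tensor layout are assumptions, and the classifier and background spectrograms are placeholders):

```python
# Attribute a spoofing classifier's output to spectro-temporal bins with DeepSHAP,
# then average the attributions over frequency (temporal view) or time (spectral view).
import numpy as np
import shap

def average_shap(model, background, utterance):
    explainer = shap.DeepExplainer(model, background)   # background: reference spectrograms
    shap_vals = explainer.shap_values(utterance)        # attributions, possibly one array per class
    vals = shap_vals[0] if isinstance(shap_vals, list) else shap_vals
    attribution = np.asarray(vals)[0]                   # (freq, time) map for the first utterance
    temporal = attribution.mean(axis=0)                 # per-frame support, averaged over frequency
    spectral = attribution.mean(axis=1)                 # per-band support, averaged over time
    return temporal, spectral
```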
Fig. 2 shows the results of SHAP analysis for the ‘LA_E_1832578’ utterance and the PC-DARTS classifier. The plot shows the time waveform (a) and the temporal variation in SHAP values averaged across the full spectrum (b). This first example shows that the classifier has learned to focus predominantly upon non-speech intervals. The support in speech intervals for either class is comparatively lower. These observations are unexpected; it is assumed a priori that spoofed speech detection systems should operate upon speech. This observation corroborates the findings in [17], and also [19] which shows that reliable bona fide/spoof decisions might even be inferred from the length of the non-speech interval.
A second visualisation focusing on this specific region is displayed in Fig. 1(d). Ignoring for now whether or not the SHAP values are positive or negative, it exhibits a high degree of correlation to the fundamental frequency and harmonics in the spectrogram, indicating the focus of the classifier on these same components. Last, while the presence of dark blue traces in Fig. 1(d) indicate components of the spectrogram which favour the negative class, the overall dominance of red colours (though not all dark red) indicate a greater support for the positive class (the classifier output correctly indicates bona fide speech).
D
CBFs that account for uncertainties in the system dynamics have been considered in two ways. The authors in [10] and [11] consider input-to-state safety to quantify possible safety violation. Conversely, the work in [12] proposes robust CBFs to guarantee robust safety by accounting for all permissible errors within an uncertainty set. Input delays within CBFs were discussed in [13, 14]. CBFs that account for state estimation uncertainties were proposed in [15] and [16]. Relying on the same notion of measurement robust CBFs as in [15], the authors in [17] present empirical evaluations on a segway. While the notion of ROCBFs that we present in this paper is inspired by measurement-robust CBFs as presented in [15], we also consider uncertainties in the system dynamics and focus on learning valid CBFs from expert demonstrations. Similar to the notion of ROCBF, the authors in [18] consider additive disturbances in the system dynamics and state-estimation errors jointly.
Control barrier functions (CBFs) were introduced in [3, 4] to render a safe set controlled forward invariant. A CBF defines a set of safe control inputs that can be used to find a minimally invasive safety-preserving correction to a nominal control law by solving a convex quadratic program. Many variations and extensions of CBFs have appeared in the literature, e.g., compositions of CBFs [5], CBFs for multi-robot systems [6], CBFs encoding temporal logic constraints [7], and CBFs for systems with higher relative degree [8]. Finally, CBFs and Hamilton-Jacobi reachability were found to share connections [9].
Learning with CBFs: Approaches that use CBFs during learning typically assume that a valid CBF is already given, while we focus on constructing CBFs so that our approach can be viewed as complementary. In [19], it is shown how safe and optimal reward functions can be obtained, and how these are related to CBFs. The authors in [20] use CBFs to learn a provably correct neural network safety guard for kinematic bicycle models. The authors in [21] consider that uncertainty enters the system dynamics linearly and propose to use robust adaptive CBFs, as originally presented in [22], in conjunction with online set membership identification methods. In [23], it is shown how additive and multiplicative noise can be estimated online using Gaussian process regression for safe CBFs. The authors in [24] collect data to episodically update the system model and the CBF controller. A similar idea is followed in [25] where instead a projection with respect to the CBF condition is episodically learned. Imitation learning under safety constraints imposed by a Lyapunov function was proposed in [26]. Further work in this direction can be found in
A promising research direction is to learn CBFs from data. The authors in [36] construct CBFs from safe and unsafe data using support vector machines, while authors in [37] learn a set of linear CBFs for clustered datasets. The authors in [38] proposed learning limited duration CBFs and the work in [39] learns signed distance fields that define a CBF. In [40], a neural network controller is trained episodically to imitate an already given CBF. The authors in [41] learn parameters associated with the constraints of a CBF to improve feasibility. These works present empirical validations, but no formal correctness guarantees are provided. The authors in [42, 43, 44, 45] propose counter-example guided approaches to learn Lyapunov and barrier functions for known closed-loop systems, while Lyapunov functions for unknown systems are learned in [46]. In [47, 48, 49] control barrier functions are learned and post-hoc verified, e.g., using Lipschitz arguments and satisfiability modulo theory, while [50] uses a counter-example guided approach. As opposed to these works, we make use of safe expert demonstrations. Expert trajectories are utilized in [51] to learn a contraction metric along with a tracking controller, while motion primitives are learned from expert demonstrations in [52]. In our previous work [53], we proposed to learn CBFs for known nonlinear systems from expert demonstrations. We provided the first conditions that ensure correctness of the learned CBF using Lipschitz continuity and covering number arguments. In [54] and [55], we extended this framework to partially unknown hybrid systems. In this paper, we focus on state estimation and provide sophisticated simulations of our method in CARLA.
C
$90^{\circ}$ difference in Tx- or Rx-polarization angles, as described
For the low SNR regime such as 5 dB SNR, the theoretically derived optimal Tx-polarization angles themselves have insignificant differences from numerically derived optimal Tx-polarization angles. The simulation results for the low SNR regime are omitted owing to the page limit.
The differences between theoretically and numerically obtained optimal Tx-polarization angles are considerable. This is due to the fact that the approximation (8) is less accurate at higher SNRs.
high SNR regime, utilizing our joint polarization pre-post coding improves the PR-MIMO channel capacity, with around 5 dB, 4 dB, and 3 dB SNR gains in
and receiver and uses random polarization, in the low SNR regime (below 3 dB). The degrees of freedom (slope at high SNR) are the same in all three cases, since they are determined by the number of antenna ports.
A
$A_{\Sigma}^{U}(\lambda R)=A_{\Sigma}^{U}(R),\qquad A_{\Sigma}^{NU}(\lambda R)=A_{\Sigma}^{NU}(R).$
We also verify that multiplying a regularizer by a scalar does not change the compliance measure which is consistent with recovery guarantees.
Consider a cone $\Sigma\subset\mathcal{H}$ and assume that $\Sigma-\Sigma$ is a union of subspaces, $(\Sigma-\Sigma)\cap S(1)$ is compact, and $\Sigma\neq\mathrm{span}(x)$ for each $x\in\Sigma$.
First, $\gamma z\in\mathcal{T}_{R}(F\Sigma)$ if, and only if, there exists $x\in\Sigma$ such that
Let $x\in\Sigma$. We remark that the tangent cone is invariant under scalar multiplication:
D
The above optimization is combinatorial in nature, as there are $\binom{N}{M}$ possible combinations, which are nearly impossible to exhaust in practice except for very small $M$. Therefore, we randomly sample a large number (say 10,000) of combinations and pick the maximizing combination as an approximate solution.
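A generic sketch of this random-search approximation (the scoring function below is a placeholder for the objective of Eq. (6), which is not reproduced in this excerpt):

```python
# Sample many M-element subsets of the N candidates and keep the highest-scoring one.
import random
from typing import Callable, Sequence

def approx_best_subset(candidates: Sequence[int], M: int,
                       score: Callable[[tuple], float],
                       n_samples: int = 10_000, seed: int = 0) -> tuple:
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(n_samples):
        subset = tuple(rng.sample(candidates, M))
        val = score(subset)
        if val > best_val:
            best, best_val = subset, val
    return best

# Toy usage: pick the 3-element subset with the largest sum out of 100 candidates.
print(approx_best_subset(range(100), 3, score=sum))
```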
To implement template selection per Eq. (6), knowledge of the landmarks is assumed. However, even such knowledge does not exist before template selection. Therefore, we propose to utilize potential key points as substitutes for landmarks. In particular, we utilize the classical multi-scale detector SIFT to find key points, where landmarks are likely to co-locate.
Figure 5: Similarities of potential key points vs. landmarks. The correlation coefficient (CC) between potential key points and landmarks is 0.462; thus we consider it feasible to replace landmarks with potential key points when estimating similarities.
Q: How good is the use of SIFT key points as substitutes for landmarks? Figure 5 demonstrates the relationship between landmarks and potential key points from handcrafted methods at the feature level (Eq. (9)).
In this paper, we propose a framework named Sample Choosing Policy (SCP) to find the most annotation-worthy images as templates. First, to handle the situation of having no landmark labels, we choose handcrafted key points as substitutes for the landmarks of interest. Second, to replace the MRE, we propose to use a similarity score between a template and the rest of the images based on the features of such potential key points.
A
This may be because task 3 was the only task where registration was performed between two follow-up time points.
The presence of similar deformations and structures in these scans likely rendered the registration between these two time points comparatively easier than the other three tasks.
Following close coordination with the clinical experts of the organizing committee (H.A., M.B., B.W., J.S., E.C., J.R., S.A., M.M.), the time-window between the two paired scans of each patient was decided to be selected such that i) the scans of the two time-points had sufficient apparent tissue deformations, and ii) confounding effects of surgically induced contrast enhancement (Albert et al., 1994; Wen et al., 2010) were avoided.
A
$\theta\in[\bar{\theta},\theta^{\star})$.
If there does not exist a neighborhood of $\theta^{\star}$ in
there exists a neighborhood of $\theta^{\star}$ in which
the $\epsilon$-neighborhood of $\theta^{\star}$ for some
of convergence of $\theta^{\star}$.
A
Control of PDE systems has been widely explored over the years [15, 16, 17, 18]. Similar to ODEs, notions of ISSt for PDE systems have garnered a lot of attention recently (see the survey paper [19]). For example, PDE ISSt has been explored for reaction-diffusion systems [20], hyperbolic systems [21], [22], parabolic systems [23], parabolic PDE systems with boundary disturbances [24], [25], systems with distributed time-delays [26], and the diffusion equation with time-varying distributed coefficients [27]. Notions of practical ISSt for PDEs have been explored in [28]. In contrast to ISSt, ISSf has remained mostly unexplored in the context of PDEs. In [29], safety verification using barrier functionals for homogeneous distributed parameter systems is considered, and numerical strategies based on semi-definite programming are used for the construction of barrier functionals; however, control performance under disturbances is not considered in that work. Given the importance of maintaining system safety under disturbances, it is critical to consider control design for PDE systems under such disturbances. In [30], safe control of the Stefan system under disturbances is considered: an operator is allowed to manipulate the control input as long as the safety constraints are satisfied, but the safety control overrides the operator control signal, realizing a feedback control that ultimately guarantees safety. The feedback law for safety control is designed using backstepping, quadratic programming, and a control barrier function. In our current work, we attempt an alternative approach to achieve safety control of a class of linear parabolic PDEs under disturbances. Specifically, we design a control law that employs feedback from the boundaries and an in-domain point, utilizing a practical ISSf (pISSf) barrier functional characterization (inspired by the notion presented in [4]). Subsequently, utilizing an ISSt Lyapunov functional characterization, we prove that the designed safety control is also an input-to-state stabilizing control under certain additional conditions. In this way, we ultimately propose a feedback control law that satisfies the conditions of both ISSt and pISSf.
In this paper, we have explored safe control of a class of linear parabolic PDEs under disturbances. First, we defined unsafe sets and the distance of the system states from such unsafe sets. Next, we constructed both control barrier and Lyapunov functionals in order to develop a design framework for the controller under specific safety and stability guarantees. Additionally, we applied our proposed strategy in the context of a battery management system using boundary coolant control. We demonstrate the efficacy of our proposed methodology through simulation studies under nominal and disturbed conditions; the simulation study shows that the proposed approach can be beneficial for maintaining safety limits. As future work, we plan to extend the framework to (i) $n$-dimensional PDEs, applied to thermal management of large-scale battery packs, and (ii) PDEs with saturation on input magnitude and rates.
In the subsequent sections, our approach to finding the control gains is as follows. First, in Section 3, we find the conditions on the control gains that satisfy the pISSf criterion in (9). Next, in Section 4, we show that the pISSf conditions on the control gains additionally guarantee ISSt for the system in the sense of (10).
In this section, we have derived the conditions on the control gains for which the system is pISSf. In the following section, we will show that the derived conditions for pISSf also ensure ISSt for the system.
In light of the aforementioned discussion, the main contribution of this paper is the following: building upon the existing literature, we extend PDE safety research by designing a feedback-based control that satisfies both pISSf and ISSt under disturbances, utilizing a pISSf barrier functional characterization and an ISSt Lyapunov characterization. As a case study, we consider a one-dimensional thermal PDE model for a battery module with boundary coolant control. We then construct a control barrier functional and a control Lyapunov functional to obtain analytical guarantees of safety and stability for the battery system; these guarantees allow us to design the controller gains for actuating the boundary coolant. The rest of the paper is organized as follows. Section 2 sets up the problem by discussing the battery module thermal model and formulating the control objectives. Sections 3 and 4 detail the pISSf-ISSt framework. Section 4 presents case studies to illustrate the proposed framework. Finally, Section 5 concludes the paper.
D
In this section, we implement and evaluate a complete testbed system for our spectrum allocation system. We use the testbed to collect training samples, which are then used
Allocation based on SSs parameters is implicitly based on real-time channel conditions, which is important for accurate and optimized spectrum allocation as the conditions affecting signal attenuation (e.g., air, rain, vehicular traffic) may change over time.
The inference time complexity of all our ML approaches is linear in the size of the input, and thus, the inference time in practice is minimal (a fraction of a second). The training time complexity of most ML models depends on the training samples and the resulting convergence, and is thus, uncertain. The actual training times incurred from our set of
Overall, we implemented a Python repository running on Linux that transmits and receives signals and measures and collects relevant parameters in real-time at
The general spectrum allocation problem is to allocate optimal power to an SU’s request across spatial, frequency, and temporal domains. We focus on the core function approximation problem, which is to determine the optimal power allocation to an SU for a given location, channel, and time instant—since frequency and temporal domains are essentially “orthogonal” dimensions of the problem and thus can be easily handled independently (as done in §III-F). We thus assume a single channel and instant for now, and discuss multiple channels and request duration in §III-F.
C
The following result states that, under Assumption 1, if the stepsize at each iteration is chosen by the doubling-trick scheme, there is an upper bound for the static regret defined in (4). Moreover, the upper bound is of order $O(\sqrt{T})$ for convex costs.
Suppose Assumption 1 holds. Furthermore, if the stepsize is chosen as $\alpha_{t}=\sqrt{\frac{C_{T}}{T}}$, the dynamic regret (5) achieved by Algorithm 1 satisfies
Suppose Assumptions 1 (i) and 2 hold, and the stepsize is chosen as $\alpha_{t}=\frac{P}{\mu t}$. Then, the static regret (4) achieved by Algorithm 1 satisfies
Suppose Assumption 1 holds. Furthermore, if the stepsize is chosen as $\alpha_{t}=\sqrt{\frac{C_{T}}{T}}$, the dynamic regret achieved by the online gradient descent algorithm (32) satisfies
Suppose Assumption 1 holds and the stepsize is chosen according to Definition 1. Then, the static regret (4) achieved by Algorithm 1 satisfies
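The statements above reference two stepsize choices; a schematic projected online gradient descent loop using them is sketched below (an illustration on assumed toy costs, not the paper's Algorithm 1):

```python
# Projected online gradient descent with alpha_t = sqrt(C_T / T) (convex costs) or
# alpha_t = P / (mu * t) (strongly convex costs), run on toy time-varying quadratics.
import numpy as np

def online_gradient_descent(grads, project, T, mode="convex", C_T=1.0, P=1.0, mu=1.0):
    """grads[t](x) returns a (sub)gradient of the cost revealed at round t+1."""
    x = project(np.zeros(2))
    iterates = []
    for t in range(1, T + 1):
        alpha = np.sqrt(C_T / T) if mode == "convex" else P / (mu * t)
        x = project(x - alpha * grads[t - 1](x))
        iterates.append(x.copy())
    return iterates

rng = np.random.default_rng(1)
centers = [rng.normal(size=2) for _ in range(100)]
grads = [lambda x, c=c: 2 * (x - c) for c in centers]       # gradients of ||x - c_t||^2
project = lambda x: x / max(1.0, np.linalg.norm(x))          # projection onto the unit ball
iterates = online_gradient_descent(grads, project, T=100, mode="convex")
```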
D
In 2015, Bar et al. (2015) used a pre-trained image classifier for classifying pathologies in chest radiographs, demonstrating the feasibility of detecting X-ray pathology (Donahue et al. (2014)). In 2017, Cicero et al. (2017) presented a similar CNN classifier that achieved an AUC of 0.964 using a medium-sized dataset of 35,000 X-rays annotated by 2443 radiologists. The authors achieved an overall sensitivity and specificity of 91% using GoogleNet (Szegedy et al. (2015)). Maduskar et al. (2013) evaluated the performance of CNNs in tuberculosis detection using a small dataset of 1007 chest X-rays. They experimented with pretrained and untrained versions of two architectures, AlexNet (Krizhevsky et al. (2012)) and GoogleNet (Szegedy et al. (2015)), and obtained the best performance with an ensemble of both architectures in the pretrained condition (AUC = 0.99); the pretrained models consistently outperformed the untrained models. Similarly, Lakhani and Sundaram (2017) compared the performance of a computer-aided tuberculosis diagnosis system (CAD4TB) with that of health professionals and found that the tuberculosis assessment of CAD4TB was comparable to that of health officers. In 2016, Wang et al. (2017a) proposed weakly supervised multi-label classification and localization of thoracic diseases using deep learning. In 2017, Rajpurkar et al. (2017) designed a deep learning model called CheXNet, which utilized a 121-layer CNN with dense connections and batch normalization to detect pneumonia. The model was trained on a publicly available dataset of 100,000 chest X-ray images and outperformed the average radiologist performance. Bar et al. (2018) used a model pretrained on a non-medical dataset and fine-tuned it on pathology features for disease identification. Dasanayaka and Dissanayake (2021) presented deep learning-based segmentation techniques to detect pulmonary tuberculosis. Patel and Kashyap (2023) utilized the Littlewood-Paley Empirical Wavelet Transform (LPEWT) to decompose lung images into sub-bands and extract robust features for lung disease detection. Deep learning has also been extensively applied to the detection of COVID-19 (Bhuyan et al. (2022); Farooq and Hafeez (2020); Yang et al. (2020); Li et al. (2020); Pushparaj et al. (2022); Irene D and Beulah (2022); Dhruv et al. (2023)).
Limitations: Most disease prediction models focus on single-label classification, where the model only detects the presence of a single pathology. However, multi-label disease classification offers several advantages over single-label classification. Multi-label diagnosis better reflects clinical practice, where it is common for patients to have multiple medical conditions. Multi-label classification allows a single instance (e.g., an X-ray image) to be associated with multiple disease labels. This provides a more comprehensive view of the patient's health, as many patients may suffer from multiple medical conditions simultaneously. Single-label classification may force a medical professional to decide which disease is the "primary" one when a patient has multiple conditions, which can lead to information loss as secondary conditions are overlooked. Multi-label classification does not require this decision and captures all relevant conditions.
In Table 3 and Table 4, we compare the performance of our proposed model against single- and multi-label prediction models for selected pathologies. Table 3 shows that our proposed multi-label approach outperforms single-label models. In Table 4, the results indicate that our proposed architecture outperforms Wang et al. Wang et al. (2017b) and Irvin et al. Irvin et al. (2019) across multiple pathologies, whereas it surpasses CheXNext Rajpurkar et al. (2018), the state-of-the-art chest X-ray disease prediction model, only for the cardiomegaly condition.
Given a medical image of a patient as input, a disease prediction system provides the probability of the occurrence of a disease. This approach represents a single-label classification problem. Examples of such diagnoses include diabetic retinopathy in eye fundus images, skin cancer in skin lesion images, and pneumonia in chest X-rays (Figure 3). However, in certain cases, multi-label prediction becomes crucial as it provides the probabilities of multiple pathologies occurring within the same medical image. This is particularly important when more than one disease may be present.
Most existing studies on disease diagnosis using chest X-rays primarily focus on detecting a single pathology, such as pneumonia or COVID-19 (Bar et al. (2015); Cicero et al. (2017); Rajpurkar et al. (2017); Dasanayaka and Dissanayake (2021); Hussain et al. (2023)). However, an X-ray image can exhibit multiple pathological conditions simultaneously. Detecting multiple pathologies can provide a comprehensive view of the patient's health from a single image. Single-label classification may produce false negatives when patients have multiple diseases, as it focuses solely on the primary condition. Multi-label classification can help reduce false negatives by identifying secondary or co-occurring diseases. Multi-label classification can also be valuable in epidemiological studies and public health research. It can provide insights into the prevalence and co-occurrence of diseases in specific populations, aiding in resource allocation and healthcare planning. In this research, we employ a 121-layer DenseNet architecture to perform diagnostic predictions for 14 distinct pathological conditions in chest X-rays. Additionally, we utilize the GRADCAM explanation method to localize specific areas within the chest radiograph, visualizing the regions the model attended to when making disease predictions and enhancing our understanding of the model's behavior. The detection of these 14 different pathology conditions, including 'Atelectasis', 'Cardiomegaly', 'Consolidation', 'Edema', 'Emphysema', 'Effusion', 'Fibrosis', 'Hernia', 'Infiltration', 'Mass', 'Nodule', 'Pneumothorax', 'Pleural Thickening', and 'Pneumonia', presents a multi-label classification problem. The input to the DenseNet architecture is a chest X-ray image; the output provides the probability of each pathology being present in the X-ray. The code for our approach is available on GitHub (https://github.com/dipkamal/chestxrayclassifier).
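As a rough illustration of the kind of multi-label setup described above, the following PyTorch sketch wires a DenseNet-121 backbone to a 14-way sigmoid head trained with a binary cross-entropy loss. It is a minimal sketch under assumed defaults (input size, batch content, no training details from the released code), not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_PATHOLOGIES = 14  # the 14 conditions listed above

# DenseNet-121 backbone with a 14-way multi-label head (one sigmoid per disease).
model = models.densenet121(weights=None)  # or weights="DEFAULT" for pretraining
model.classifier = nn.Linear(model.classifier.in_features, NUM_PATHOLOGIES)

criterion = nn.BCEWithLogitsLoss()  # independent binary decision per pathology

x = torch.randn(4, 3, 224, 224)                        # batch of chest X-rays
y = torch.randint(0, 2, (4, NUM_PATHOLOGIES)).float()  # multi-hot labels
loss = criterion(model(x), y)
probs = torch.sigmoid(model(x))                        # per-pathology probabilities
```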
A
A discrete emotion out of a total of 12 (joy, sadness, surprise, contempt, hope, fear, attraction, disgust, tenderness, anger, calm, and tedium) [21].
Physiological signals [17]: BVP, GSR, and SKT physiological signals captured during the experimentation by the BioSignalPlux research toolkit are provided in a binary MATLAB® file (.mat). It contains a cell array with 100 rows (one per volunteer) and 14 columns (one per video). Each cell contains four fields: volunteer identifier, clip or trial identifier, filtering indicator, and an inner cell array (with the physiological data associated with that specific clip and volunteer).
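For readers who want to inspect such a release programmatically, the snippet below shows one way to load a MATLAB cell array of this shape with SciPy; the file name and the variable lookup are hypothetical, since the actual names are fixed by the dataset release.

```python
from scipy.io import loadmat

# Hypothetical file name; the real one is fixed by the dataset release.
mat = loadmat("physiological_signals.mat", squeeze_me=True)

# Skip MATLAB metadata keys and take the (assumed single) data variable.
var_name = [k for k in mat if not k.startswith("__")][0]
cells = mat[var_name]            # object array: 100 volunteers x 14 videos

first_cell = cells[0, 0]         # volunteer id, clip id, filtering flag, signals
print(type(first_cell), getattr(first_cell, "dtype", None))
```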
The signals being released are the ones acquired by the BioSignalPlux research toolkit. Specifically, the raw and filtered BVP, GSR, and SKT signals captured during every video visualization are provided. The preprocessing is as follows:
Additionally, two in-house sensory systems are employed. On the one hand, the Bindi bracelet [28] measures dorsal wrist BVP, ventral wrist GSR, and forearm SKT. The hardware and software particularities of this device are detailed in [29, 30, 31]. The previously mentioned BioSignalPlux toolkit is employed as a gold standard against which the performance of the bracelet's experimental sensors is analyzed. BVP and GSR signals from BioSignalPlux and Bindi were successfully compared and correlated in [30] and [31]. On the other hand, a GSR sensor to be integrated into the next version of the Bindi bracelet is used. Its hardware and software particularities are detailed in [32].
The BioSignalPlux research toolkit system (https://biosignalsplux.com/products/kits/researcher.html). It is a commonly used device for acquiring different physiological signals in the literature [23, 24, 25, 26]. More specifically, we capture finger Blood Volume Pulse (BVP), ventral wrist Galvanic Skin Response (GSR), forearm Skin Temperature (SKT), trapezius Electromyography (EMG), chest respiration (RESP), and inertial wrist movement through an accelerometer.
D
We have made available an online system with this trained network so that anyone can use and test it, simply by uploading images. The software automatically labels the images as positive or negative for AMD. We have also made the source code of the entire software publicly available so that researchers can use it as is or improve it. We are focused on fostering partnerships to facilitate and conduct research on the use of deep learning to generate and recognize medical images.
Figure 5 provides examples of real and synthetic images from eyes positive and negative for AMD. One can observe the high quality of the generated images for both AMD and non-AMD cases.
We have made the source code for generating the synthetic images publicly available to facilitate joint research in the field. We have also provided free access through this paper to the online AMD detection model. This will facilitate future work to broaden the scope towards detecting the severity of AMD and differentiating it from other diseases. For generating synthetic medical images, a broader range of deep architectures needs to be considered, along with the effectiveness of heatmaps in assisting clinicians.
Evaluating the quality of synthetic images is important for establishing their usability in practical applications, such as training deep learning models. It can significantly influence the training of these models. If the data does not accurately represent reality or lacks diversity, the synthetic data may introduce noise into the training, decreasing the model performance.
The potential of diffusion models [63], known for their advanced capabilities in generating high-quality and diverse images, presents an exciting direction for future research in AMD and other ophthalmology diagnoses. These models should be considered for future development.
D
Our approach parallels the development in [17], where we addressed the approximation of model predictive control policies for deterministic systems. We ask whether the training of a ReLU-based neural network to approximate a controller $\Phi(\cdot)$ has been sufficient to ensure that the network's output function $\Phi_{\mathrm{NN}}(\cdot)$ will be stabilizing for (1). Our approach is based on the offline characterization of the error function $e(x)\coloneqq\Phi_{\mathrm{NN}}(x)-\Phi(x)$ using mixed-integer (MI) optimization, where $\Phi(\cdot)$ is a continuous PWA law defined using any of (3), (4) or (5) (as we show in §4).
Our approach parallels the development in [17], where we addressed the approximation of model predictive control policies for deterministic systems. We ask whether the training of a ReLU-based neural network to approximate a controller $\Phi(\cdot)$ has been sufficient to ensure that the network's output function $\Phi_{\mathrm{NN}}(\cdot)$ will be stabilizing for (1). Our approach is based on the offline characterization of the error function $e(x)\coloneqq\Phi_{\mathrm{NN}}(x)-\Phi(x)$ using mixed-integer (MI) optimization, where $\Phi(\cdot)$ is a continuous PWA law defined using any of (3), (4) or (5) (as we show in §4).
The first quantity is precisely of the type required to apply the stability result of §3.2, thus supplying a condition on the optimal value of an MILP sufficient to certify the uniform ultimate boundedness of the closed-loop system (1) under the action of $\Phi_{\mathrm{NN}}(\cdot)$, obtained by suitably training a ReLU network to replicate $\Phi(\cdot)$.
By analyzing the results in Tab. 3 – specifically, by contrasting the third and fourth columns – we notice that we always succeeded in designing a minimum-complexity, stabilizing ReLU-based surrogate $\Phi_{\mathrm{NN}}(\cdot)$ of $\Phi(\cdot)$ in (10) for all the considered cases, i.e., Ex. (a)–(j). In particular, the resulting values for $\bar{e}_{\infty}$ suggest that the neighbourhood of the origin we are assured to reach with $\Phi_{\mathrm{NN}}(\cdot)$ can be made very small in practice, certifying that the system state is ultimately bounded in a set up to $99.02\%$ smaller than the original volume of the control invariant set $\mathcal{S}$ (values for $b$, last column). Note that the obtained results can, in principle, be further improved by adding extra layers or neurons in the architecture underlying $\Phi_{\mathrm{NN}}(\cdot)$ – this may come at the price of slightly increasing both the training time and the time required for computing $\bar{e}_{\infty}$.
We will obtain a condition on the optimal value of an MILP sufficient to assure that the closed-loop system (1) under the action of $\Phi_{\mathrm{NN}}(\cdot)$ is (uniformly) ultimately bounded within a set of adjustable size and (exponential) convergence rate, according to the following notion:
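For intuition only, the sketch below estimates the worst-case approximation error $\|\Phi_{\mathrm{NN}}(x)-\Phi(x)\|_{\infty}$ by random sampling; this is a heuristic lower bound and not the MILP-based certification discussed here, and the saturated-linear controller, the perturbed surrogate, and the sampling region are all invented placeholders.

```python
import numpy as np

def sampled_error_bound(phi, phi_nn, sampler, n_samples=10_000):
    """Monte-Carlo estimate of max_x ||phi_nn(x) - phi(x)||_inf over sampled
    states; a heuristic lower bound only, not the MILP certificate."""
    worst = 0.0
    for _ in range(n_samples):
        x = sampler()
        worst = max(worst, float(np.max(np.abs(phi_nn(x) - phi(x)))))
    return worst

# Toy usage with a made-up saturated-linear controller and a perturbed surrogate.
K = np.array([[0.5, -0.3]])
phi = lambda x: np.clip(K @ x, -1.0, 1.0)
phi_nn = lambda x: np.clip(K @ x, -1.0, 1.0) + 1e-3 * np.tanh(x[:1])
sampler = lambda: np.random.uniform(-1.0, 1.0, size=2)
print(sampled_error_bound(phi, phi_nn, sampler))
```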
D
Specifically, the $E$-verifier can be used to obtain, with polynomial complexity, one necessary and one sufficient condition for $C$-enforceability; in case the sufficient condition is satisfied, the trimmed version of the $E$-verifier leads to a strategy to enforce concealability, also with polynomial complexity.
These developments should be contrasted against constructions with exponential complexity [12] (the latter, however, provide a necessary and sufficient condition).
Specifically, the $E$-verifier can be used to obtain, with polynomial complexity, one necessary and one sufficient condition for $C$-enforceability; in case the sufficient condition is satisfied, the trimmed version of the $E$-verifier leads to a strategy to enforce concealability, also with polynomial complexity.
It is worth mentioning that the focus of this paper is on the use of reduced-complexity constructions (with polynomial complexity) to provide one necessary condition and one sufficient condition for $C$-enforceability.
Taking advantage of the special structure of the concealability problem, we propose a verifier-like structure of polynomial complexity to obtain one necessary condition and one sufficient condition for enforceability of the defensive function.
A
In this section we review typical loss functions used in image registration, and analyze the related requirements for privacy-preserving optimization.
Since the registration gradient is generally driven mainly by a fraction of the image content, such as the image boundaries in the case of SSD cost, a reasonable approximation of Equations (4) and (6) can be obtained by evaluating the cost only on relevant image locations.
The loss $f$ can be any similarity measure, e.g., the Sum of Squared Differences (SSD), the negative Mutual Information (MI), or normalized cross correlation (CC).
A typical loss function to be optimized during the registration process is the sum of squared intensity differences (SSD) evaluated on the set of image coordinates:
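To make this concrete, here is a small NumPy sketch of an SSD-style cost evaluated either on all pixels or only on a supplied subset of coordinates (the kind of subsampling mentioned above for cheaper evaluation); the function name and argument layout are illustrative, not the paper's implementation.

```python
import numpy as np

def ssd_loss(fixed, moving_warped, coords=None):
    """Sum of squared intensity differences between a fixed image and an
    already-transformed moving image.

    coords: optional (N, 2) integer array of pixel locations; when given, the
    cost is evaluated only on those locations (e.g. boundary pixels)."""
    if coords is None:
        diff = fixed - moving_warped
    else:
        r, c = coords[:, 0], coords[:, 1]
        diff = fixed[r, c] - moving_warped[r, c]
    return float(np.sum(diff ** 2))
```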
Thanks to the privacy and security guarantees of these cryptographic tools, during the entire registration procedure, the content of the image data $S$ and $J$ is never disclosed to the opposite party.
C
$\hat{b}^{t}_{1}(\tau^{H}_{1})\leftarrow\hat{\mathbb{P}}^{t}(\tau^{H}_{1})$.
12: Update the confidence set $\mathcal{C}^{t}$ by (4.4).
To conduct optimistic planning, we seek the policy that maximizes the return among all parameters $\theta\in\mathcal{C}^{t}$ and the corresponding features. The update of the policy takes the following form,
$\pi^{t}\leftarrow\operatorname*{argmax}_{\pi\in\Pi}\max_{\theta\in\mathcal{C}^{t}}V^{\pi}(\theta),$
$\pi^{t}\leftarrow\operatorname*{argmax}_{\pi\in\Pi}\max_{\theta\in\mathcal{C}^{t}}V^{\pi}(\theta)$.
A
$\mathcal{O}_{Y_{1},B^{\prime}}=0$.
to be a variant that returns the set of columns $Y_{1}$ and the set of
$Y_{1}$, with the notations of the above lemma.
$R_{[\mathrm{K}]}$, with the notations of lem. 61.
notations and hypotheses as in lemma 53, with $A:=A_{\Sigma}$,
B
$\lambda_{\min}(b(k)\hat{\mathcal{L}}_{\mathcal{G}}+a(k)\mathcal{H}^{T}\mathcal{H})=\frac{1}{k+1}\lambda_{\min}(\hat{\mathcal{L}}_{\mathcal{G}}+\mathcal{H}^{T}\mathcal{H})=\frac{1}{2k+2}$. Then, the condition (i) holds with $h=1$ and $\sum_{k=0}^{\infty}\Lambda_{k}^{h}=\sum_{k=0}^{\infty}\frac{1}{2k+2}=\infty$.
For the special case without regularization, we directly obtain the following corollary from Theorem 1.
The convergence and performance analysis of the algorithm (6) are presented in this section. First, Lemma 1 gives a nonnegative supermartingale-type inequality for the squared estimation error, based on which Theorem 1 proves the almost sure convergence of the algorithm. Then, Theorem 2 gives intuitive convergence conditions for the case with balanced conditional digraphs via Lemma 2. Corollary 2 then gives more intuitive convergence conditions for the case with Markovian switching graphs and regression matrices. Finally, Theorem 3 establishes an upper bound for the regret of the algorithm via Lemma 3, and Theorem 4 gives a non-asymptotic rate for the algorithm. The proofs of the theorems, Proposition 1, and Corollary 2 are in Appendix A, and those of the lemmas in this section are in Appendix B.
Then, we give intuitive convergence conditions for the case with balanced conditional digraphs. We first introduce the following definitions.
Next, we give more intuitive convergence conditions for the case with Markovian switching graphs and regression matrices. We first make the following assumption.
A
Graph signal variations can also be computed in the $\ell_{1}$-norm as graph total variation (GTV) [10, 11].
Graph signal variations can also be computed in the $\ell_{1}$-norm as graph total variation (GTV) [10, 11].
Though convex, minimization of an $\ell_{1}$-norm objective like GTV requires iterative algorithms like proximal gradient (PG) [24] that are often computationally expensive.
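As a toy illustration of the iterative proximal-gradient machinery such $\ell_1$ objectives require, the sketch below runs ISTA on a generic $\ell_1$-regularized least-squares problem; a GTV objective composes the $\ell_1$-norm with a graph difference operator and typically needs a more elaborate splitting, so this stand-in only shows the soft-thresholding prox step and the overall loop.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam, n_iter=500):
    # Proximal gradient (ISTA) for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage on a random sparse-recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = ista(A, y, lam=0.1)
```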
Its generalization, total generalized variation (TGV) [17, 18], better handles the known staircase effect, but retains the non-differentiable $\ell_{1}$-norm that requires iterative optimization.
Total variation (TV) [16] was a popular image prior due to the availability of algorithms for minimizing the convex but non-differentiable $\ell_{1}$-norm.
B
In summary, our simulation study showed that DL-based methods can be used for MR image re-parameterization. Based on our preliminary results, we suggest that DL-based methods hold the potential to generate, via simulation, MR imaging scans with a new set of parameters.
In summary, our simulation study showed that DL-based methods can be used for MR image re-parameterization. Based on our preliminary results, we suggest that DL-based methods hold the potential to generate, via simulation, MR imaging scans with a new set of parameters.
Future work can focus on varying a larger number of acquisition parameters. This approach could also be utilized for T1/T2 mapping, subject to the availability of sufficient training data.
Brainweb is a simulated brain database that contains a set of realistic MRI data volumes produced by an MRI simulator. We used this tool to generate test scans in 5 different parameter settings. The results for both models can be seen in Figure 6. The evaluation metrics on this test set can be found in Table 2.
In our work, we propose a coarse-to-fine fully convolutional network for MR image re-parameterization, mainly for the Repetition Time (TR) and Echo Time (TE) parameters. As the model is coarse-to-fine, we use image features extracted from an image-reconstruction auto-encoder as input instead of directly using the raw image. This technique makes the proposed model more robust to potential overfitting. Based on our preliminary experiments, DL-based methods hold the potential to simulate MRI scans with a new set of parameters. Our deep learning model also performs the task considerably faster than simple biophysical models. To generate our data, we rely on MRiLab [7], which is a conventional MR image simulator. Source code is publicly available at https://github.com/Abhijeet8901/Deep-Learning-Based-MR-Image-Re-parameterization.
B
1) To the best of our knowledge, this design represents the first real-time photon counting receiver implementation on a conventional SiPM and an FPGA, enhancing its potential for IoT applications compared to previous offline approaches [10], [11], [12], [26], [27].
In this paper, we have demonstrated a novel real-time SiPM-based receiver with a low bit rate and high sensitivity, which has the potential for low transmitter power consumption. The work evaluates the analog chain of the receiver to show the potential for lower power consumption. The numerical simulation shows that the required power consumption of the amplifier is approximately 50 mW at 120 MHz GBP. In addition, to further reduce the complexity and power consumption of the digital circuit design, the FPGA implements an asynchronous photon detection method. Finally, the implementation of interleaved counters in the receiver allows it to receive streaming data without dead time. To the best of our knowledge, this design is the first implemented on an FPGA and a conventional SiPM, making it more suitable for utilizing SiPMs in IoT applications than previous offline approaches.
To optimize the real-world performance of the real-time SiPM-based receiver for IoT applications, the power consumption of its components was measured. Table II presents the power consumption measurements for the prototyped receiver under a data rate of up to 1 Mbps. It is observed that the SiPM's power consumption increases with an increase in data rate. This is because the current within the SiPM originates from electrons excited by the detected photons, maintaining a proportionate relationship with the incident light; achieving a higher data rate depends on detecting more photons. In the meantime, the measured power consumption of the evaluation board was considerably higher than that of the designed circuit due to numerous unused peripheral interfaces, the advanced RISC machine (ARM) core, and FPGA resources active during the board's power-up process. To evaluate the power consumption of the designed receiver circuit, separate measurements were taken for the Xilinx ZYNQ 7000 FPGA, first with only the transmitter PRBS generator and then with both the transmitter and receiver implemented. The difference between these values gives an estimate of the power consumption of the digital circuit of the receiver, which is 36 mW. Among the receiver components, the three amplifiers consume the highest amount of power, approximately 2 W. Therefore, analyzing the power consumption of the amplifiers is the focus of Sections V and VI.
2) By conducting numerical simulations, this study assessed the GBP of the post-readout circuit within the SiPM-based optical receiver. This assessment complements previous research findings and offers insights into the circuit’s suitability for future low-power consumption applications.
The previous section designed the receiver based on an ideal setup to investigate the SiPM performance. However, receiver chains often contain amplifier blocks and lowpass or bandpass hardware filters, which affect the shape of the SiPM output pulses delivered to the FPGA. To ensure the best transmission performance of the SiPM pulses, three high-GBP amplifier blocks were used in the real-time experiments. However, these high-performance amplifiers also increase the receiver's power consumption, a disadvantage especially in IoT applications. When an amplifier is selected, factors such as bandwidth, slew rate, and power consumption should be considered. For a single-pole-response voltage-feedback amplifier, the product of the DC gain and the bandwidth is constant, which trades off against power consumption [38]. In order to minimize the power consumption of the receiver, the effect of the receiver's GBP on the BER was investigated. Since changing the GBP of each amplifier is not practical due to experimental limitations, the rest of the investigation uses numerical simulation based on the offline processing method in Section II. The captured sample waveforms from the oscilloscope were filtered through a first-order Butterworth low-pass filter (LPF) implemented in software with a bandwidth below 1 GHz.
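A minimal SciPy sketch of such an offline filtering step is given below; the sampling rate, cutoff, and the random placeholder waveform are assumptions, since the actual captured SiPM waveforms and filter bandwidths come from the experiment.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 5.0e9        # assumed oscilloscope sampling rate, samples per second
cutoff = 500.0e6  # assumed LPF bandwidth under test, below 1 GHz

# First-order Butterworth low-pass filter applied to captured waveform samples.
b, a = butter(N=1, Wn=cutoff, btype="low", fs=fs)

t = np.arange(0, 2e-6, 1.0 / fs)
waveform = np.random.default_rng(0).standard_normal(t.size)  # placeholder samples
filtered = lfilter(b, a, waveform)
```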
C
Suppose we extrapolate the ≈0.05 m/s spent by the spacecraft in the Hohmann-like transfer plus orbital maintenance in the 800 m orbit (tighter than the tightest 1 km orbit of OSIRIS-REx [54]). In that case, the spacecraft could still orbit Bennu, and make similar orbital transfers, for about 227 days before reaching the 9 m/s best scenarios of Takahashi & Scheeres [46]. The point here is not to advise the use of this paper's exact architecture and mission profile. Instead, it shows that a fully autonomous operation opens new possibilities for asteroid exploration. It is a paradigm shift in the current conservative approach of severely constraining uncertainties before close-proximity.
It is also crucial to emphasize that the comparison of these magnitudes with the OSIRIS-REx mission and other missions hereafter serves only to provide a notion of the order of magnitude of the $\Delta V$ budget in real mission cases. The intention is only to show that the architecture proposed in this study aligns well with the values expected for a similar kind of mission within the current paradigm. Of course, real missions have many more requirements, including very strict scientific requirements, that may impose a high burden on the $\Delta V$ budget.
In addition to these benefits, and more importantly, an autonomous and rapid approach to exploration can make current scientific asteroid missions more cost-effective and time-efficient. Current missions have a conservative and cautious operational profile, often taking months of surveying and slowly approaching the target to constrain the uncertainties to very low levels before the primary goal of the mission [48, 54]. For instance, the OSIRIS-REx mission took about four months to approach and make a preliminary survey of the asteroid Bennu before being inserted into its first orbit. The preliminary survey lasted approximately 20 days, in which the spacecraft made multiple flybys at a distance of roughly 7 km to reduce the uncertainty in the asteroid's mass to 2% before a safe insertion into orbit [54].
We would like to emphasize that our intention is not to advocate a universal approach of "rapid exploration" in all asteroid missions. Instead, our objective is to illustrate that it is not necessary to reduce uncertainties to an excessively low level for autonomous robotic spacecraft. We aim to demonstrate that autonomous robotic spacecraft possess the capability to effectively handle uncertainties, thus reducing the time spent solely on uncertainty reduction for navigation purposes. We fully recognize the significance of prolonged periods dedicated to sensor and hardware testing, calibration, detecting contingencies, extensive imaging from various phase angles, and other critical activities.
Well-designed guidance and control laws can allow an autonomous spacecraft to operate more boldly, even with a higher level of uncertainty in the navigation. On top of that, there is no significant compromise in the $\Delta V$ budget, as one might expect. Therefore, a fully autonomous mission in close proximity might not need a long 20-day preliminary survey phase like the OSIRIS-REx mission, and its 94-day approach phase could potentially be shortened [54]. It is important to note that a real mission involves various additional requirements, beyond reducing uncertainties to a very low level, that impact the time spent during the preliminary survey and approach. However, from a GN&C perspective, our study indicates that there is no reason autonomous spacecraft studies should follow these same approach times to reduce uncertainties to a very low level.
D
Consider a multirotor UAV with an antenna on the top surface (i.e., the UAV's surface facing the sky) that is communicating with a ground node. Assume that the UAV moves away from the node. To do this, the multirotor UAV has to tilt in such a way that its bottom surface (i.e., the UAV's surface facing the ground) is slightly oriented towards the ground node, see Fig. 3. This can fully or partially block the LoS between the antennas of the ground node and of the UAV. In the case of fixed-wing UAVs, airframe shadowing can occur when the UAVs turn. In turning, they usually change their roll by controlling their ailerons. During this manoeuvre, one wing tilts up and the other tilts down. This tilting might temporarily block the LoS with other communication nodes. The severity of airframe shadowing, for both types of UAVs, depends on the airframe or wing material, its size, its shape, the antenna location on the UAV's frame, and the UAV trajectories. This phenomenon has been observed in practice but, as mentioned in [95, 96], it has not yet been fully studied.
iii. Mathematical model available: in this case, we only have access to a mathematical model of the communications channel. In our previous work [4], we considered the problem of a multirotor UAV that must reach some goal while transmitting data to a BS. The only information about the communications channel used for solving the communications-aware trajectory planning was the pathloss model and the p.d.f. of the shadowing. In [119], the authors considered the problem of optimizing the position of a UAV operating as a BS. To solve this, they considered the pathloss model, complemented by the Probability Mass Function (p.m.f.) of the LoS. In [120], we considered the problem of mitigating the small-scale fading in an MR communications link by leveraging the knowledge of its p.d.f. and spatial correlation. ∎
where $T_{\mathrm{hover}}^{m}$ is the time that the UAV hovers over the $m$th HL, which depends on the number of HLs, the channel capacity, and the probability of receiving a successful transmission, see [162] for the details; $T_{\mathrm{travel}}$ is the time that the UAV spends in motion, which depends mainly on $\mathbf{L}$ and on $\mathbf{Z}$. In summary, this CaTP problem takes the following form:
Consider a multirotor UAV with an antenna on the top surface (i.e., the UAV's surface facing the sky) that is communicating with a ground node. Assume that the UAV moves away from the node. To do this, the multirotor UAV has to tilt in such a way that its bottom surface (i.e., the UAV's surface facing the ground) is slightly oriented towards the ground node, see Fig. 3. This can fully or partially block the LoS between the antennas of the ground node and of the UAV. In the case of fixed-wing UAVs, airframe shadowing can occur when the UAVs turn. In turning, they usually change their roll by controlling their ailerons. During this manoeuvre, one wing tilts up and the other tilts down. This tilting might temporarily block the LoS with other communication nodes. The severity of airframe shadowing, for both types of UAVs, depends on the airframe or wing material, its size, its shape, the antenna location on the UAV's frame, and the UAV trajectories. This phenomenon has been observed in practice but, as mentioned in [95, 96], it has not yet been fully studied.
The communications channel gain depends on the relative orientation of the transmitting and receiving antennas. During the flying phase, a multirotor UAV must tilt, thus changing its antenna orientation. As a consequence, the communication channel observed when a multirotor UAV hovers is different from when it moves [99], see Fig. 4. Furthermore, the antenna's contribution to the channel gain will vary with the motion of the UAV, see [98] for more details. Similarly, during turning manoeuvres, a fixed-wing UAV has to tilt, thus changing its antenna orientation, see Fig. 5. The communications channel observed when fixed-wing UAVs move in a straight line is different from when they are turning. We also note that the location and orientation of the antenna on the UAV have a significant impact on the communications channel, as shown experimentally in [100, 101, 102, 103].
D
In the case where $\Sigma_{2}$ is static or stability of $x_{2}^{*}=0$ is of no concern, the dissipativity conditions (i)-(iv) in Theorem 20 for $\Sigma_{2}$ can be simplified by omitting $x_{2}$ as in (6) and restricting $\mathcal{X}$ to be $\mathcal{X}_{1}$ in Assumption 12 or 14 and Theorem 20. In this case, stability of $x_{1}^{*}=0$ may be established with $S(x_{1},z)=S_{1}(x_{1},z_{1})+S_{2}(z_{2})$ by looking at the closed-loop map from $w_{1}$ to $y_{1}$.
Interestingly, asymptotic stability of the feedback system may be established using a type of strict dissipativity where the strictness is derived from
Feedback stability in the sense of Lyapunov often leaves much to be desired. Next, we examine the stronger notion of asymptotic feedback stability via dissipativity.
IQCs, whereas the dynamics of the auxiliary system facilitate the verification of the dissipativity of the system with respect to the supply rate in
of dissipativity so that the stronger notion of asymptotic stability of $\Sigma_{1}\|\Sigma_{2}$ can be established. It is worth noting that
A
For a stochastic system, it is generally hard to render a subset of the state space (almost surely) invariant because the diffusion coefficient is required to be zero at the boundary of the subset (the detail is discussed in [18], which aims to make the state of a stochastic system converge to the origin with probability one and confine the state to a specific subset with probability one; this aim is somewhat like that of a control barrier function; Tamba et al. make a similar argument for CBFs in [19], but their sufficient condition is more stringent). To avoid this tight condition on the coefficient, we should design a state-feedback law whose magnitude is large, in general diverging, at the boundary of the subset so that the effect of the law overcomes the disturbance term. Moreover, a functional ensuring the (almost sure) invariance of the subset probably diverges at the boundary of the set, as with a global stochastic Lyapunov function [22, 23, 24] and an RCBF.
On the other hand, the CBF approach is closely related to the control Lyapunov function (CLF), which immediately provides a stabilizing control law from the CLF, as in Sontag [16] for deterministic systems and Florchinger [17] for stochastic systems. Therefore, in the CBF approach, deriving a safety-critical control law immediately from the CBF is also important. For this discussion, the problem setting in which the safe set is coupled with the CBF is appropriate, as in Ames et al. [2]. The stochastic version of Ames et al.'s result was recently discussed by Clark [12]; he asserts that his RCBF and ZCBF guarantee the safety of a set with probability one. At the same time, Wang et al. [13] analyze the probability of the time at which the sample path leaves a safe set under conditions similar to Clark's ZCBF. Wang et al. also claim that a state-feedback law achieving safety with probability one often diverges toward the boundary of the safe set; this inference also follows from the fact that the conditions for the existence of an invariant set in a stochastic system are strict and influenced by the properties of the diffusion coefficients [18]. This argument is in line with the stochastic viability of Aubin and Da Prato [20]. For CBFs, Tamba et al. [19] provide sufficient conditions for safety with probability one, which require difficult conditions on the diffusion coefficients. Therefore, we need to reconsider a sufficient condition for safety with probability one, and we also need to rethink the problem setup to compute the safety probability obtained by a bounded control law.
The above discussion also implies that if a ZCBF is defined for a stochastic system and ensures "safety with probability one," the favorable robustness property of the ZCBF is likely lost. The reason is that the related state-feedback law generally diverges at the boundary of the safe set. Hence, the previous work in [13] proposes a ZCBF with analysis of the exit time of
In Section 4, first, we propose an AS-RCBF and an AS-ZCBF ensuring the invariance of a safe set with probability one. Second, we design a safety-critical controller ensuring the existence of an AS-RCBF and an AS-ZCBF and show that the controller diverges towards the boundary of the safe set. Third, we construct a new type of stochastic ZCBF that clarifies the probability of invariance of a safe set and shows the convergence of a specific expectation related to the attractiveness of the safe set from outside the set.
For a stochastic system, it is generally hard to render a subset of the state space (almost surely) invariant because the diffusion coefficient is required to be zero at the boundary of the subset (the detail is discussed in [18], which aims to make the state of a stochastic system converge to the origin with probability one and confine the state to a specific subset with probability one; this aim is somewhat like that of a control barrier function; Tamba et al. make a similar argument for CBFs in [19], but their sufficient condition is more stringent). To avoid this tight condition on the coefficient, we should design a state-feedback law whose magnitude is large, in general diverging, at the boundary of the subset so that the effect of the law overcomes the disturbance term. Moreover, a functional ensuring the (almost sure) invariance of the subset probably diverges at the boundary of the set, as with a global stochastic Lyapunov function [22, 23, 24] and an RCBF.
B
$\bar{\sigma}({\bf Z}_{\rm droop}(s))\approx\bar{\sigma}({\bf Z}_{\rm GFM}(s))\approx\bar{\sigma}({\bf Z}_{\rm PI}(s))\approx-40~\mathrm{dB}=0.01$ at 10 Hz in Fig. 3. When considering VSMs with reactive power droop control, virtual impedance, and damping enhancement, the reactance is 0.04 pu, 0.03 pu, and 0.02 pu, respectively, since $\bar{\sigma}({\bf Z}_{\rm GFM-QD}(s))\approx-26~\mathrm{dB}\approx 0.04$, $\bar{\sigma}({\bf Z}_{\rm GFM-VI}(s))\approx-30~\mathrm{dB}\approx 0.03$, and $\bar{\sigma}({\bf Z}_{\rm GFM-damp}(s))\approx-32~\mathrm{dB}\approx 0.02$ at 10 Hz in Fig. 3.
In this paper, to ensure the generality of the proposed approach, we consider GFM converters with different implementations, such as droop control, power synchronization control, and VSMs (w/wo reactive power droop control [23], virtual impedance [24], and damping enhancement [25, 26]). We focus on the voltage-source behavior of GFM converters, which helps improve the small-signal stability of systems dominated by GFL converters.
Rather than changing the power network, we use GFM converters under power synchronization control or VSMs (w/wo reactive power droop control), respectively, to improve the power grid strength and stabilize the system according to Proposition IV.1. Fig. 8, Fig. 9, and Fig. 10 show the responses of the system with different capacity ratios $\gamma$ under different GFM methods, respectively. There is a voltage disturbance from the infinite bus at t = 0.2 s (a voltage sag of 5% that lasts 10 ms). It can be seen that the damping ratio of the system is improved when a larger $\gamma$ is adopted (i.e., with more GFM converters), and the system has satisfactory performance with $\gamma=17.8\%$ or $\gamma=21.4\%$ (aligned with Example 1). Furthermore, it can be confirmed that the $\gamma$ under VSMs with reactive power droop control needs to be larger to achieve similar damping performance, compared with GFM converters under power synchronization control and VSMs without reactive power droop control.
We consider the scenario where the system is unstable with $\mathrm{gSCR}=\mathrm{gSCR}_{0}=1.1$ (i.e., $\gamma=0$) at the 35 kV bus. The other settings for the power grid in Fig. 7 are the same as those described above. Fig. 12 shows the responses of the system with different capacity ratios $\gamma$. It can be seen that the damping ratio of the system is improved when a larger $\gamma$ is adopted (i.e., with more GFM converters), and the system has satisfactory performance with $\gamma=4.8\%$ (aligned with Example 2). To validate our analysis in Section II, Fig. 13 displays the responses of the system (active and reactive power of wind farm 1) under a voltage disturbance (a voltage sag of 5% at the infinite bus that lasts 1 ms), in which we change the VSMs without reactive power droop control to GFL converters with constant AC voltage control ($\gamma=4.8\%$). It can be seen that the system becomes unstable if, instead of installing VSMs without reactive power droop control, one chooses to install GFL converters with constant AC voltage control. The reason is that even with constant AC voltage control, GFL converters can only exhibit 1D-VS behaviors due to their control structure and thus cannot enhance the power grid strength, as discussed in Section II. By comparison, VSMs without reactive power droop control have 2D-VS behaviors and can effectively enhance the power grid strength.
In this paper, to test the generality and effectiveness of the proposed approach for GFM converters under different implementations, we consider power synchronization control and VSMs w/wo reactive power droop control in the analysis and simulation studies to quantify how they improve the small-signal stability of the system. VSMs without reactive power droop control belong to the category of VSMs without additional control methods mentioned above.
D
Table 7: Quantitative comparison (average PSNR/SSIM) with state-of-the-art approaches for tiny/light image SR on benchmark datasets (×4). The best and second best performances are highlighted and underlined, respectively.
In Fig. 7, we also exhibit the visual results of several tiny/lightweight models on Urban100 (×4). For img_078, the tiny and light models are tested with the patches framed by the green and red boxes, respectively. Generally, MANs restore texture better and more clearly than other methods.
To validate the effectiveness of our MAN, we compare our normal model to several SOTA classical ConvNets [58, 8, 59, 41, 40, 37]. We also add SwinIR [30] for reference. In Tab. 6, the quantitative results show that our MAN exceeds other convolutional methods by a large margin. The maximum improvement in PSNR reaches 0.69 dB for ×2, 0.77 dB for ×3, and 0.81 dB for ×4. Moreover, we compare our MAN with SwinIR. For ×2, our MAN achieves competitive or even better performance than SwinIR. The PSNR value on Manga109 is boosted from 39.92 dB to 40.02 dB. For ×4, MAN is slightly behind SwinIR because the latter uses the ×2 model as the pre-trained model. More importantly, MAN is significantly smaller than existing methods.
Overall study on components of MAN. In Tab. 2, we present the results of deploying the proposed components in our tiny and light networks. In general, the best performances are achieved by employing all proposed modules. Specifically, improvements of 0.25 dB and 0.29 dB on Urban100 [18] can be observed for MAN-tiny and MAN-light, while the parameters and calculations increase negligibly. Among these components, the LKAT module and the multi-scale mechanism are more important for enhancing quality. Without either of them, the PSNR drops by 0.09 dB. The GSAU is an economical replacement for the MLP. It reduces 15K parameters and 3.6G calculations while bringing significant improvements across all datasets.
To verify the efficiency and scalability of our MAN, we compare MAN-tiny and MAN-light to some state-of-the-art tiny [12, 26, 56, 44, 27] and lightweight [19, 36, 52, 30, 57] SR models. Tab. 7 presents the numerical results, showing that our MAN-tiny/light outperforms all other tiny/lightweight methods. Specifically, MAN-tiny exceeds second place by about 0.2 dB on Set5, Urban100, and Manga109, and by around 0.07 dB on Set14 and BSD100. We also list EDSR-baseline [31] for reference. Our tiny model has fewer than 150K parameters but achieves a restoration quality similar to EDSR-baseline, which is 10× larger than ours. Similarly, our MAN-light surpasses both CNN-based and transformer-based SR models. In comparison with IMDN (CNN) and SwinIR-light/ELAN-light (Transformer), our model leads by 0.66 dB/0.23 dB on the Urban100 (×4) benchmark. Moreover, our MAN-light is superior to the traditional performance-oriented EDSR. In detail, the proposed model takes only 2% of the parameters and computations of EDSR while achieving higher PSNR on all benchmarks.
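For reference, PSNR values of the kind reported in these tables are computed from the mean squared error as in the NumPy sketch below; note that SR papers typically evaluate on the Y channel after cropping scale-dependent borders, which this minimal helper does not do.

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```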
D
For the system safety analysis, we are interested in computing the BRT of $\mathcal{L}(\beta_{L})$ given the dynamics in (1).
The BRT is the set of states such that system trajectories starting from this set will eventually reach the given target set despite the worst-case disturbance (or, more generally, an exogenous, adversarial input).
Backward Reachable Tube (BRT): the set of initial states of the system for which the agent, acting optimally and under worst-case disturbances, will eventually reach the target set $\mathcal{L}$ within the time horizon $[t,T]$:
The BRT for this collision set corresponds to all the states from which the pursuer can drive the system trajectory into the collision set within the time horizon $[t,T]$, despite the best efforts of the evader to avoid a collision.
First, a target function $l(x)$ is defined whose sub-zero level set is the target set $\mathcal{L}$, i.e., $\mathcal{L}=\{x: l(x)\leq 0\}$. Typically, $l(x)$ is defined as a signed distance function to $\mathcal{L}$. The BRT seeks to find all states that could enter $\mathcal{L}$ at any point within the time horizon and therefore might be unsafe. This is computed by finding the minimum distance to $\mathcal{L}$ over time:
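As a small illustration of the target-function construction (not the full BRT computation described above), the sketch below builds a signed distance function to a circular target set and extracts its sub-zero level set on a grid; the set's center and radius are arbitrary placeholders.

```python
import numpy as np

def target_function(x, center=np.zeros(2), radius=1.0):
    """Signed distance l(x) to a circular target set: l(x) <= 0 inside the set."""
    x = np.atleast_2d(x)
    return np.linalg.norm(x - center, axis=-1) - radius

# Evaluate l on a grid and extract the sub-zero level set (the target set L).
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 101),
                            np.linspace(-2, 2, 101)), axis=-1).reshape(-1, 2)
l_values = target_function(grid)
in_target_set = l_values <= 0.0
```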
B
Another approach is to intentionally use broken (zig-zag) multi-hop trajectories to mislead the attacker or avoid risk areas.
The use of distributed antennas is a common approach to address the coverage issue. The fronthaul connection that is needed between the central node and the remote radio heads is highly challenging due to its high bandwidth and stringent latency requirements. It is generally implemented by an optical network. RIS-based wireless networks can be regarded as a more cost-effective alternative for implementing a distributed antenna system with integrated access and fronthaul. This can be enabled by the following distributed network components.
After highlighting several advantages of the directive RIS architecture, we shall discuss its disadvantages as compared to the reflective RIS configuration. In addition to the need for a (metasurface) lens for analog DFT processing, the major issue is the need for longer RF interconnections (see Fig. 7) and a multistage switching network for conductive RF routing, which is in general quite challenging at high frequencies. Switching matrices are used in several applications such as satellite communications [37]. As the frequency and the number of ports increase, however, the losses of signal traces and switches become overwhelming, and designing a printed circuit board (PCB) layout with global interconnections and minimal signal-integrity issues is no easy task.
In practice, real-time reconfigurability in the range of milliseconds might still be difficult to achieve as it imposes stringent timing requirements on the control channel. Alternatively, beam-hopping techniques that are popular in satellite communications [34] can be considered. Beam hopping consists of serving user spots sequentially according to a predetermined schedule. The periodic beam-hopping time plan can be determined and updated based on the varying traffic demand, and the RIS scattering pattern can be optimized based on long-term statistical channel information [35], which also reduces the training overhead (cf. Section IV-A). Therefore, the reconfiguration needs to be done only occasionally with long cycle times, and the requirements on the control channel are significantly relaxed. To allow for initial access, all potential beam directions are sequentially illuminated and scanned (beam sweeping) during multiple synchronization signal blocks (SSBs). This results in substantial initial-access latency and a long beam-hopping period. Therefore, the RIS node is designed to support a moderate number of wide initial-access beams or, alternatively, a permanent directive link is dedicated between the access point and the RIS node. While the control overhead is reduced, synchronous operation (for instance via GPS) between the RIS nodes and the donor nodes is still required. A notable advantage of the redirective RIS system is the simultaneous beam hopping of multiple beams at full aperture gain, particularly when the RIS node is shared among several donor sites (e.g., Fig. 2), as explained in the next subsection.
We introduced the concept of nonlocal or redirective reconfigurable surfaces with low-rank scattering as an artificial wave-guiding structure for wireless wave propagation at high frequencies. We showed multiple functionalities that can be implemented, including beam bending, multi-beam data forwarding, wave amplification, routing, splitting, and combining. The observed results indicate that transformation-based intelligent surfaces can make mmWave and THz networks more scalable, secure, flexible, and robust while being energy, cost, and spectrally efficient. Mitigating the coverage issue of these frequency bands can be considered a critical milestone in the evolution of terrestrial wireless mobile access. Beyond the improved coverage, RIS-based remote nodes can also improve the network capacity due to the extremely high directional propagation and the possibility of massive spatial multiplexing with massive MIMO at the central macro baseband node. This enables tens or even hundreds of bits per hertz per square kilometer of area spectral efficiency for mmWaves at low cost and high coverage. While lens-based RIS offers much better performance in terms of signal processing efficiency, its bulkiness (particularly in the case of 3D beamforming) and scalability issues (due to the longer RF interconnections and switching implementation) might be disadvantageous.
D
In the VR display task, the central server transmits virtual $360^{\circ}$ video streaming to the user. To avoid the transmission of the whole $360^{\circ}$ video, the central server can predict the eye movements of the user and extract the corresponding FoV as goal-oriented semantic information. Apart from the PSNR and SSIM mentioned in AR, timing accuracy and position accuracy are also important effectiveness-aware performance metrics to avoid cybersickness, including: 1) initial delay: time difference between the start of head motion and that of the corresponding feedback; 2) settling delay: time difference between the stop of head motion and that of the corresponding feedback; 3) precision: angular positioning consistency between physical movement and visual feedback in terms of degrees; and 4) sensitivity: capability of inertial sensors to perceive subtle motions and subsequently provide feedback to users.
Due to the difficulty in supporting massive haptic data with stringent latency requirements, JND can be identified as important goal-oriented semantic information to ignore the haptic signal that cannot be perceived by the manipulator. Two effectiveness-aware performance metrics including SNR and SSIM have been verified to be applicable to vibrotactile quality assessment.
To implement a closed-loop XR-aided teleoperation system, the wireless network is required to support mixed types of data traffic, including control and command (C&C) transmission, haptic information feedback transmission, and rendered $360^{\circ}$ video feedback transmission [14]. As the XR-aided teleoperation task relies on both parallel and consecutive communication links, guaranteeing the cooperation among these links to execute the task is of vital importance. Specifically, the parallel visual and haptic feedback transmissions should be aligned with each other when arriving at the manipulator, and the consecutive C&C and feedback transmissions should be completed within the motion-to-photon delay constraint, which is defined as the delay between the movement of the user's head and the change of the VR device's display reflecting that movement. Violating either the alignment of the parallel links or the latency constraint of the consecutive links will lead to a break in presence (BIP) and cybersickness. Therefore, both parallel alignment and consecutive latency should be quantified into effectiveness-aware performance metrics to guarantee the success of XR-aided teleoperation. Moreover, due to the motion-to-photon delay, the control error between the expected trajectory and the actual trajectory accumulates over time, which may lead to task failure. Hence, how to alleviate the accumulated error remains an important challenge that needs to be solved.
Haptic communication has been incorporated by industries to perform grasping and manipulation, where the robot transmits the haptic data to the manipulator. The shape and weight of the objects to be held are measured using cutaneous feedback derived from the fingertip contact pressure and kinesthetic feedback of finger positions, which should be transmitted within stringent latency requirements to guarantee industrial operation safety.
In the scenario of a swarm of (autonomous) robots that need to perform a collaborative task (or a set of tasks) within a deadline over a wireless network, an effective communication protocol that takes into account the peculiarities of such a scenario is needed. Consider the simple case of two robots, Robot A and Robot B, which are not collocated and communicate through a wireless network. Robot A remotely controls Robot B to execute a task; the outcome of that operation is fed back to Robot A, which performs a second operation and sends the result back to Robot B. All of this must happen within a strict deadline. The amount of information that is generated, transmitted, processed, and sent back can be very large with the traditional information-agnostic approach. On the other hand, if we take into account the semantics of information and the purpose of communication, we change the whole information chain, from its generation point until its utilization. Therefore, defining goal-oriented semantic metrics for the control loop and communication between a swarm of (autonomous) robots is crucial, and it will significantly reduce the amount of information exchanged, leading to more efficient operation.
C
The second test case is the 33-bus system case33bw, which has multiple branches. In this example, we demonstrate the efficacy of our approach in handling a system with complex components through the implementation of volt-VAR control, which represents smarter inverter behavior (whose characteristics are described in osti2016 ). To incorporate the behavior of volt-VAR control, we enhance the power flow solver used to compute the CLAs by integrating an additional fixed-point iterative method. Table 4.1 shows the computation times for the bilinear and the two MILP formulations. We exclude the computation time for the KKT formulation since the solver fails to find even a feasible (but potentially suboptimal) point within 55000 seconds (15 hours). Our final test case is the 141-bus system case141. Similar to the 33-bus system, the solver could not find the optimal solution for the KKT formulation within a time limit of 15 hours. It is evident the KKT formulation is intractable. Table 4.1 again shows the results for this test case, and Figs. 0b and 0c compare the computation times for the bilinear and MILP formulations.
The first test case is the 10-bus system case10ba, a simple single-branch network. We consider a variant where the nominal loads are 60% of the values in the Matpower file. The results from each formulation place a sensor at the end of the branch (the bus furthest from the substation) with an alarm threshold of 0.9 per unit (at the voltage limit). Fig. 0a compares computation times from the three formulations. The KKT formulation takes 26.7 seconds, while the bilinear and MILP formulations take 1.96 and 1.54 seconds, respectively. Since the sensor threshold for the KKT and MILP formulations is at the voltage limit, AGD is not needed. Conversely, the bilinear formulation gives a higher alarm threshold. As a result, the AGD method is applied as a post-processing step to achieve the lowest possible threshold without introducing false alarms. The number of false positives reduces from 5.48% to 0%. Executing the AGD method takes 0.11 seconds.
The second test case is the 33-bus system case33bw, which has multiple branches. In this example, we demonstrate the efficacy of our approach in handling a system with complex components through the implementation of volt-VAR control, which represents smarter inverter behavior (whose characteristics are described in osti2016 ). To incorporate the behavior of volt-VAR control, we enhance the power flow solver used to compute the CLAs by integrating an additional fixed-point iterative method. Table 4.1 shows the computation times for the bilinear and the two MILP formulations. We exclude the computation time for the KKT formulation since the solver fails to find even a feasible (but potentially suboptimal) point within 55000 seconds (15 hours). Our final test case is the 141-bus system case141. Similar to the 33-bus system, the solver could not find the optimal solution for the KKT formulation within a time limit of 15 hours. It is evident the KKT formulation is intractable. Table 4.1 again shows the results for this test case, and Figs. 0b and 0c compare the computation times for the bilinear and MILP formulations.
To address challenges associated with power flow nonlinearities, we employ a linear approximation of the power flow equations that is adaptive (i.e., tailored to a specific system and a range of load variability) and conservative (i.e., intended to over- or under-estimate a quantity of interest to avoid constraint violations). These linear approximations are called conservative linear approximations (CLAs) and were first proposed in BUASON2022 . As a sample-based approach, the CLAs are computed by solving a constrained regression problem over all samples within the range of power injection variability. They linearly relate the voltage magnitudes at a particular bus to the power injections at all PQ buses. These linear approximations can also effectively incorporate the characteristics of more complex components (e.g., tap-changing transformers, smart inverters, etc.), requiring only the ability to apply a power flow solver to the system. Additionally, in the context of long-term planning, the CLAs can be readily computed with knowledge of expected DER locations and their potential power injection ranges. The accuracy and conservativeness of our proposed method are based on information about the locations of DERs and the variability of their power injections. As inputs, our method uses the net load profiles, including the size of PVs, when computing the CLAs. In practice, this data can be obtained by leveraging the extensive existing research on load modeling and monitoring to identify the locations and capabilities of behind-the-meter devices (refer to, e.g., Grijalva2021 ; Schirmer2023 ).
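A minimal sketch of fitting one such conservative (here, over-estimating) linear approximation from power flow samples is given below; the linear-programming formulation and the use of scipy's linprog are our own illustrative choices and may differ in detail from the constrained regression of BUASON2022 :

```python
import numpy as np
from scipy.optimize import linprog

def conservative_linear_approx(P, v):
    """Fit an over-estimating linear model v_hat = P @ a + b of one bus's
    voltage magnitude from power flow samples.

    P: (S, n) sampled power injections at the PQ buses.
    v: (S,) voltage magnitudes returned by a power flow solver.
    Returns (a, b) such that the model never under-estimates any sample."""
    S, n = P.shape
    A = np.hstack([P, np.ones((S, 1))])   # model evaluated at all samples
    c = A.sum(axis=0)                     # minimize total over-estimation
    # Conservativeness: A @ x >= v  <=>  -A @ x <= -v
    res = linprog(c, A_ub=-A, b_ub=-v, bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[n]
```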
Table 4.1 shows both the computation times and the results of randomly drawing sampled power injections within the specified range of variability, computing the associated voltages by solving the power flow equations, and finding the number of false positive alarms (i.e., the voltage at a bus with a sensor is outside the sensor's threshold but there are no voltage violations in the system). The results for the 33-bus and 141-bus test cases given in Table 4.1 illustrate the performance of the proposed reformulations. Whereas the KKT formulation is computationally intractable, our proposed reformulations find solutions within approximately one minute, with the MILP formulation with the BVR method typically exhibiting the fastest performance. The solutions to the reformulated problems place a small number of sensors (two to four sensors in systems with an order of magnitude or more buses). No solutions suffer from false negatives, since all samples with a voltage violation trigger an alarm. There are a number of false alarms prior to applying the AGD; after its application, they decrease dramatically to a small fraction of the total number of samples (1.34% and 0.01% in the 33-bus and 141-bus systems, respectively). These observations suggest that our sensor placement formulations provide a computationally efficient method for identifying a small number of sensor locations and associated alarm thresholds that reliably identify voltage constraint violations with no false negatives (missed alarms) and few false positives (spurious alarms).
D
We have employed an advanced classification-based DOA estimation algorithm that is free of quantization errors. The backbone network is CNN, where a mask layer is used to enhance the robustness of the DOA estimation. Furthermore, to improve the accuracy of the DOA estimation of the CNN-based classification model, we incorporate a quantization-error-free soft label encoding and decoding strategy.
Consider a room with an ad-hoc microphone array of N𝑁Nitalic_N nodes and B𝐵Bitalic_B speakers, where each node comprises a conventional array of M𝑀Mitalic_M microphones.
We recorded a real-world dataset named Libri-adhoc-nodes10. It covers a conference room and an office room. Each room has 10 ad-hoc nodes and a loudspeaker. Each node contains a 4-channel linear array with an aperture of 8 cm. Fig. 4 shows the recording environment of the two rooms. The size of the office room is approximately $9.8\times 10.3\times 4.2$ m with $\mathrm{T}_{60}\approx 1.39$ s. The size of the conference room is approximately $4.26\times 5.16\times 3.16$ m with $\mathrm{T}_{60}\approx 1.06$ s. The dataset records the ‘test-clean’ subset of the LibriSpeech data, which contains 20 male speakers and 20 female speakers, replayed by the loudspeaker in the rooms. The ad-hoc nodes and the loudspeaker have the same height of 1.3 m. The ambient noise of the recording environments can be ignored. A detailed description of the data and its open-source release, which includes the speaker IDs and positions, microphone node positions, self-rotation angles, etc., will be provided at https://github.com/Liu-sp/Libri-adhoc-nodes10.
We have recorded a real-world dataset named Libri-adhoc-nodes10. The Libri-adhoc-nodes10 dataset is a 432-hour collection of replayed speech of the “test-clean” subset of the Librispeech corpus [32], where an ad-hoc microphone array with 10 nodes was placed in an office and a conference room, respectively. Each node is a linear array of four microphones. For each room, 4 array configurations with 10 distinct speaker positions per configuration were designed.
For the test sets, we need to generate simulated data for ad-hoc microphone arrays, whose ad-hoc nodes are either circular arrays or linear arrays. Specifically, for each randomly generated room, we repeated the procedure of constructing the training data, except that (i) we randomly placed 10 ad-hoc nodes in the room and (ii) we placed B𝐵Bitalic_B speakers in the room with B={1,2}𝐵12B=\{1,2\}italic_B = { 1 , 2 }. We added diffuse noise with an SNR level randomly selected from [10,20,30]102030[10,20,30][ 10 , 20 , 30 ] dB. The SNR was calculated as an energy ratio of the average direct sound of all microphone channels to the diffuse noise. Note that, due to the potential large difference in distances between the nodes and speakers, the SNR at the nodes could vary in a wide range. Each test set consists of 1,200 utterances. To study the effects of different types of microphone arrays on performance, for each randomly generated test room, we applied exactly the same environmental setting (including the speech source, room environment, speaker positions, microphone node positions and self-angles) to both circular-array-based ad-hoc nodes and linear-array-based ad-hoc nodes.
C
The even coding model also has the potential to adapt to binocular vision data by incorporating an additional input dimension of size two.
As a result, the question of whether these methods are principled or reflect crucial features of biological systems is often sidelined or deemed irrelevant.
Investigating whether the model can detect binocular disparity or even construct a 3D model of the world would be fascinating.
The even coding model also has the potential to adapt to binocular vision data by incorporating an additional input dimension of size two.
After the model has been trained, the vast majority of the output values are either at 0 or 1, signifying that our model encoded the images using a binary representation.
B
$u = \uppi(I, \mathbf{x}, E) = \uppi(S(\mathbf{x}, E), \mathbf{x}, E) \implies u = \hat{\uppi}(\mathbf{x})$
Specifically, given the set of undesirable states 𝒪𝒪\mathcal{O}caligraphic_O, the sensor mapping can be composed with the vision-based controller to obtain the closed-loop, state-feedback policy, π^^π\hat{\uppi}over^ start_ARG roman_π end_ARG for a given environment:
The complement of the BRAT thus represents the unsafe states for the robot under π^^π\hat{\uppi}over^ start_ARG roman_π end_ARG.
Given the policy π^^π\hat{\uppi}over^ start_ARG roman_π end_ARG, we compute the BRT 𝒱𝒱\mathcal{V}caligraphic_V by solving the HJB-VI in (7).
Finally, a model-based spline planner P𝑃Pitalic_P takes in the predicted waypoint to produce a smooth control profile for the robot. Hence, the closed-loop policy π^^π\hat{\uppi}over^ start_ARG roman_π end_ARG is given by π^:=P∘C∘S⁢(𝐱,g,E)assign^π𝑃𝐶𝑆𝐱𝑔𝐸\hat{\uppi}:=P\circ C\circ S(\mathbf{x},g,E)over^ start_ARG roman_π end_ARG := italic_P ∘ italic_C ∘ italic_S ( bold_x , italic_g , italic_E ).
C
An upward pointing arrow leaving node (t,u)𝑡𝑢(t,u)( italic_t , italic_u ) represents y⁢(t,u)𝑦𝑡𝑢y(t,u)italic_y ( italic_t , italic_u ), the probability of outputting an actual label; and a rightward pointing arrow represents Ø⁢(t,u)italic-Ø𝑡𝑢\O(t,u)italic_Ø ( italic_t , italic_u ), the probability of outputting a blank at (t,u)𝑡𝑢(t,u)( italic_t , italic_u ).
In standard decoding algorithms for RNN-Ts, the emission of a blank symbol advances input by one frame.
introduces big blank symbols. Those big blank symbols could be thought of as blank symbols with explicitly defined durations – once emitted, the big blank advances the t𝑡titalic_t by more than one, e.g. two or three.
Note that when outputting an actual label, u𝑢uitalic_u would be incremented by one; and when a blank is emitted, t𝑡titalic_t is incremented by one.
With the multi-blank models, when a big blank with duration m𝑚mitalic_m is emitted, the decoding loop increments t𝑡titalic_t by exactly m𝑚mitalic_m.
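A greedy decoding loop reflecting this rule might look like the sketch below; the blank ids, the big-blank durations, and the assumption of precomputed joint-network log-probabilities are our own simplifications:

```python
import numpy as np

BLANK = 0                      # standard blank: advances t by one frame
BIG_BLANKS = {1: 2, 2: 3}      # illustrative big-blank ids -> durations

def greedy_multiblank_decode(log_probs):
    """Greedy multi-blank transducer decoding over a precomputed
    (T, U_max, V) array of joint log-probabilities (a real decoder would
    re-run the joiner each time a label is emitted)."""
    T, U_max, _ = log_probs.shape
    t, u, hyp = 0, 0, []
    while t < T and u < U_max - 1:
        k = int(np.argmax(log_probs[t, u]))
        if k == BLANK:
            t += 1                      # blank: move to the next frame
        elif k in BIG_BLANKS:
            t += BIG_BLANKS[k]          # big blank: jump ahead by its duration
        else:
            hyp.append(k)               # actual label: advance u, keep t
            u += 1
    return hyp
```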
C
The utterances of the training, development and seen test sets in the noisy LA dataset are generated based upon those of the training, development and test sets from the LA dataset, respectively. The utterances in these three sets are generated by using six scenes: Airport, Bus, Park, Public, Shopping, Station. The utterances of the unseen test set are simulated with four scenes: Metro, Pedestrian, Street, Tram.
The acoustic scenes are randomly sampled to mix with the bona fide and spoofed utterances at 6 different SNRs each: -5dB, 0dB, 5dB, 10dB, 15dB and 20dB.
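A minimal sketch of such SNR-controlled mixing is given below; the function and variable names are our own, and the actual SceneFake generation pipeline may differ:

```python
import numpy as np

def mix_at_snr(utterance, scene, snr_db):
    """Add an acoustic-scene recording to an utterance at a target SNR.

    utterance, scene: 1-D float arrays of equal length (the caller is
    assumed to tile or crop the scene); snr_db: target SNR in dB."""
    p_utt = np.mean(utterance ** 2)
    p_scene = np.mean(scene ** 2) + 1e-12
    scale = np.sqrt(p_utt / (p_scene * 10.0 ** (snr_db / 10.0)))
    return utterance + scale * scene

# e.g. one utterance mixed at each SNR used in the dataset:
# mixes = [mix_at_snr(utt, scene, snr) for snr in (-5, 0, 5, 10, 15, 20)]
```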
The fake utterances are generated by mixing another randomly sampled acoustic scene with the enhanced utterances, each mixed at 6 different SNRfake values: -5dB, 0dB, 5dB, 10dB, 15dB and 20dB. Fake utterances are also generated by using the open-source toolkit Augly.
The real utterances of our training, development and test sets are generated based upon the bona fide ones of the training, development and test sets from the LA dataset, respectively. They are generated by randomly adding acoustic scenes to the clean utterances at 6 different SNRfake values: -5dB, 0dB, 5dB, 10dB, 15dB and 20dB.
The statistics of real and fake utterances in our SceneFake dataset at different SNRs are reported in Tables 4 and  5, where #-5dB, #0dB, #5dB, #10dB, #15dB and #20dB denote the number of real or fake utterances at 6 different SNRs each -5dB, 0dB, 5dB, 10dB, 15dB and 20dB.
A
[4, 5]. The technique discussed in this paper, building upon the preliminary idea introduced in [1], uses a system realization that is based on the “information-state” as the state vector. An ARMA model, which represents the current output in terms of the inputs and outputs from the past $q$ steps, is found by solving a linear regression problem relating the input and output data. Defining the state vector as these past inputs and outputs (the information-state) lets us realize a state-space model directly from the estimated time-varying ARMA parameters.
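As a rough single-input, single-output illustration of this idea (the least-squares setup and helper names below are our own simplifications, not the exact formulation used in the paper):

```python
import numpy as np

def fit_arma(u, y, q):
    """Least-squares fit of y[k] = sum_i a_i y[k-i] + sum_i b_i u[k-i],
    i = 1..q, from input/output data (SISO, time-invariant sketch)."""
    rows, targets = [], []
    for k in range(q, len(y)):
        rows.append(np.concatenate([y[k-q:k][::-1], u[k-q:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta                          # [a_1..a_q, b_1..b_q]

def predict(theta, u, y_init, q):
    """Propagate the information state (the past q outputs and inputs)."""
    y = list(y_init[:q])
    for k in range(q, len(u)):
        z = np.concatenate([np.array(y[k-q:k])[::-1], u[k-q:k][::-1]])
        y.append(float(theta @ z))
    return np.array(y)
```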
The pioneering work in system identification for LTI systems is the Ho-Kalman realization theory [6], of which the Eigensystem Realization Algorithm (ERA) is one of the most popular variants [4]. Another system identification method, namely the $q$-Markov covariance equivalent realization, generates a stable LTI system model that matches the first “$q$” Markov parameters of the underlying system and also matches the equivalent steady-state covariance response/parameters of the identified system [7, 8]. These algorithms assume stable systems so that the response can be modeled using a finite set of parameters relating the past inputs to the current output (a moving-average (MA) model). For lightly damped and marginally stable systems, the length of history to be considered becomes very long and the number of parameters to be estimated very large, leading to numerical issues when solving for the parameters. To overcome this issue, the observer Kalman identification algorithm (OKID) [9] uses an ARMA model, rather than an MA model, consisting of past outputs and controls to model the current output. The time-varying counterparts of ERA and OKID - TV-ERA and TV-OKID - were developed in [10] and [11], respectively. The identification of time-varying linear systems (TV-ERA and TV-OKID) also builds on earlier work on time-varying discrete-time system identification [5, 12]. OKID and TV-OKID explain the usage of an ARMA model as being equivalent to an observer-in-the-loop system, and postulate that the identified observer is a deadbeat observer similar to the work in [13].
The results show that the information-state model can predict the responses accurately. The TV-OKID approach can also predict the response well in the oscillator experiment when the experiments have zero initial conditions, but it suffers from inaccuracy when the experiments have non-zero initial conditions, as seen in Fig. 5b. In the case of the fish and cart-pole examples, TV-OKID fails with the observer in the loop. We found that the identified open-loop Markov parameters predict the response well, but the prediction diverges from the truth when the observer is introduced, making the predictions useless. This observation further validates the hypothesis that the ARMA model cannot be explained by an observer-in-the-loop system. Hence, we use only the estimated open-loop Markov parameters, without the observer, to show the performance of the TV-OKID prediction. The last $q$ steps in OKID are ignored, as there is not sufficient data to calculate models for the last few steps, as discussed in Sec. 6.3. There is also the potential for numerical errors to creep in due to the additional steps taken in TV-OKID: determination of the time-varying Markov parameters from the time-varying observer Markov parameters, calculating the SVD of the resulting Hankel matrices, and calculating the system matrices from these SVDs, as mentioned in [11]. On the other hand, the effort required to identify systems using the information-state approach is negligible compared to other techniques, as the state-space model can be set up by just using the ARMA parameters. More examples can be found in [1], where the authors use the information-state model for optimal feedback control synthesis in complex nonlinear systems.
This paper describes a new system realization technique for the system identification of linear time-invariant as well as time-varying systems. The system identification method proceeds by modeling the current output of the system using an ARMA model comprising the finite past outputs and inputs. A theory based on linear observability is developed to justify the usage of an ARMA model, which also provides the minimum number of inputs and outputs required from the history for the model to fit the data exactly. The method uses the information-state, which simply comprises the finite past inputs and outputs, to realize a state-space model directly from the ARMA parameters. This is shown to be universal for both linear time-invariant and time-varying systems that satisfy the observability assumption. Further, we show that feedback control based on the minimal information state is optimal for the underlying state-space system, i.e., the information state is indeed a lossless representation for the purpose of control. The method is tested on various systems in simulation, and the results show that the models are accurately identified.
The idea of using an ARMA model to describe the input-output data of an LTI system was first introduced in a series of papers related to the Observer/Kalman filter identification (OKID) algorithm [9, 18, 13], and the time-varying case was later considered in [11]. The credit for using an ARMA model for system identification goes to the authors of the papers mentioned above, however, the explanation for the ARMA parameters given in their work is not exact, and does not apply in general as we will show empirically. This section will summarize the OKID algorithm and discuss why the information-state approach is computationally much simpler and the theory discussed in Section 3 based on observability is the correct explanation for the ARMA parameters.
A
In many cases, the transmission process is the main bottleneck causing delays in edge inference, especially when the communication rate is low.
The extra feature extraction step in our method increases the complexity on the device side, but it effectively removes the task-irrelevant information and largely reduces the communication overhead.
While our method introduces additional complexity on the device side due to the complex feature extraction process, the proposed TOCOM-TEM method still enables low-latency inference.
In this paper, we develop a task-oriented communication framework for edge video analytics, which effectively extracts task-relevant features and reduces both the spatial and temporal redundancy in the feature domain.
Thus, it addresses the objective of reducing communication overhead by discarding task-irrelevant information.
A
The Connectome 1.0 human brain DW-MRI data used in this study is part of the MGH Connectome Diffusion Microstructure Dataset (CDMD) (Tian et al., 2022), which is publicly available on the figshare repository https://doi.org/10.6084/m9.figshare.c.5315474. The MATLAB code generated for the simulation study, parameter fitting, and optimising the b-value sampling is openly available at https://github.com/m-farquhar/SubdiffusionDKI.
The utility of diffusional kurtosis imaging for inferring information on tissue microstructure was described decades ago. Continued investigations in the DW-MRI field have led to studies clearly describing the importance of mean kurtosis mapping to clinical diagnosis, treatment planning and monitoring across a vast range of diseases and disorders. Our research on robust, fast, and accurate mapping of mean kurtosis using the sub-diffusion mathematical framework promises new opportunities for this field by providing a clinically useful, and routinely applicable mechanism for mapping mean kurtosis in the brain. Future studies may derive value from our suggestions and apply methods outside the brain for broader clinical utilisation.
The direct link between the sub-diffusion model parameter $\beta$ and mean kurtosis is well established (Yang et al., 2022; Ingo et al., 2014, 2015). An important aspect to consider is whether the mean $\beta$ used to compute the mean kurtosis is alone sufficient for clinical decision making. While the benefits of using kurtosis metrics over other DW-MRI-derived metrics are clear in certain applications, the adequacy of mean kurtosis over axial and radial kurtosis is less apparent. Most studies perform the mapping of mean kurtosis, probably because the DW-MRI data can be acquired in practically feasible times. Nonetheless, we can point to a few recent examples where the measurement of directional kurtosis has clear advantages. A study on mapping tumour response to radiotherapy treatment found axial kurtosis to provide the best sensitivity to treatment response (Goryawala et al., 2022). In a different study, a correlation was found between glomerular filtration rate and axial kurtosis in assessing renal function and interstitial fibrosis (Li et al., 2022a). Unipolar depression subjects have been shown to have brain-region-specific increases in mean and radial kurtosis, while for bipolar depression subjects axial kurtosis decreased in specific brain regions and decreases in radial kurtosis were found in other regions (Maralakunte et al., 2022). This selection of studies highlights future opportunities for extending the methods to additionally map axial and radial kurtosis.
Instead of attempting to improve an existing model-based approach for kurtosis estimation, as has been considered by many others, we considered the problem from a different perspective. In view of the recent generalisation of the various models applicable to DW-MRI data (Yang et al., 2022), the sub-diffusion framework provides new, unexplored opportunities, for fast and robust kurtosis mapping. We report on our investigation into the utility of the sub-diffusion model for practically useful mapping of mean kurtosis.
For DKI to become a routine clinical tool, DW-MRI data acquisition needs to be fast and provide a robust estimation of kurtosis. The ideal protocol should have a minimum number of b-shells and diffusion encoding directions in each b-shell. Powder averaging over diffusion directions improves the signal-to-noise ratio of the DW-MRI data used for parameter estimation. Whilst this approach loses the directionality of the kurtosis, it nonetheless provides a robust method of estimating mean kurtosis (Henriques et al., 2021), a metric of significant clinical value.
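A minimal sketch of the powder-averaging step (the array shapes and the shell-matching tolerance are illustrative assumptions):

```python
import numpy as np

def powder_average(signal, bvals, shells, tol=50.0):
    """Direction-average the DW-MRI signal of one voxel on each b-shell.

    signal: (n_measurements,) signal values; bvals: (n_measurements,)
    b-values; shells: nominal shell b-values. Averaging over directions
    improves SNR at the cost of directional information."""
    return np.array([signal[np.abs(bvals - b) < tol].mean() for b in shells])
```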
A
Variance of σ02superscriptsubscript𝜎02\sigma_{0}^{2}italic_σ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT and γ02superscriptsubscript𝛾02\gamma_{0}^{2}italic_γ start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT
$\overset{(a)}{\leq} \left\|\mathbf{F}^{\mathrm{H}}\mathbf{G}^{\prime}_{i}\right\|^{2}_{\mathrm{F}} + \left\|\mathbf{F}^{\mathrm{H}}\mathbf{G}^{\prime}_{j}\right\|^{2}_{\mathrm{F}}$
$\bm{\theta}^{\prime} \triangleq [\bm{\theta}^{\mathrm{T}}, t]^{\mathrm{T}} \in \mathbb{C}^{(M+1)\times 1}$
Indoor region size (m3superscriptm3\mathrm{m}^{3}roman_m start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT)
$= \left\|\mathbf{F}^{\mathrm{H}}\Delta\mathbf{G}^{\prime}_{i} - \mathbf{F}^{\mathrm{H}}\Delta\mathbf{G}^{\prime}_{j}\right\|^{2}_{\mathrm{F}}$
C
By (2) and (3), the spatial temperature profiles are omitted and a coherent temperature profile between all nodes and edges is ensured, see also e.g. Krug et al. (2021).
We regard the internal energy of water as the main energy carrier and neglect other energy forms. Furthermore, as in Machado et al. (2022), we assume a linear dependency between the internal energy and the temperature of water.
The power-to-heat (P2H) connection of the two layers is implemented by heat pumps that couple nodes from the electrical layer with edges from the thermal layer.
Typically, the dynamics of the electrical layer and the heat pumps are fast compared to the thermal layer,
The thermal edges, i.e., the simple pipes and heat exchangers, are modeled as pipes transporting water as the thermal energy carrier, which exchanges heat with its environment due to thermal losses, heat injection or extraction.
D
Step 4: Combine subproblems’ solutions to establish a valid upper bound for (29). Evaluate the bound performance by measuring the gap between lower and upper bounds.
Table 1 reports the optimality gap and the computation time of Step 3 after one iteration, which is the most time-consuming component in the proposed method. The results demonstrate the consistent performance of our approach across different settings. Using the multipliers obtained in Step 2 without further updates, we can achieve a tight upper bound with an optimality gap of approximately 3%, indicating that a near-optimal solution to (29) is attained. In contrast, the benchmark method cannot provide an accurate estimation of the unknown globally optimal solution. A major reason is that in the benchmark approach, the complementarity constraint has to be first linearized using the big-M method and then dualized to ensure decomposability. However, this process introduces strong linearity in the relaxed problem, which tends to produce extreme solutions that compromise the quality of the derived bound. As a result, the Lagrangian relaxation in the benchmark approach only yields a trivial upper bound with up to 80% optimality gap, providing little insight into the problem’s true complexity. Our proposed method, however, circumvents the need to dualize the complementarity constraint by employing appropriate relaxations based on the inherent characteristics of the problem. By doing so, we simplify the complex model into a more tractable form with favorable structures, while still capturing the essential features of the problem. Importantly, the complementarity constraint remains respected in the relaxed subproblem, allowing us to derive a significantly tighter upper bound compared to the benchmark approach. Besides, the average computation time to optimally solve each subproblem in the proposed method is less than 2 minutes. We emphasize that this solving time is satisfactorily short for an infrastructure planning problem that does not require real-time computation. In fact, the computation time is orders of magnitude shorter than the implementation time of a deployment plan (e.g., in the range of months or years), rendering it insignificant for the planning purpose. This comprehensive evaluation confirms the effectiveness and efficiency of our proposed approach.
We establish a tight upper bound for the joint deployment problem despite its nonconcavity. A decomposable problem is developed through proper model relaxations. By leveraging the favorable structures of the relaxed problem, we are able to obtain an accurate estimation of the globally optimal solution to the original problem, enabling us to verify the optimality of the solution obtained. We show that our approach provides a high-quality upper bound with an optimality gap of around 3%.
Step 5: Terminate the procedure if the optimality gap is satisfactorily tight. Otherwise, update the multipliers according to (43c) and go to Step 3.
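A schematic of the Step 3 to Step 5 loop might look as follows; the placeholder callables, the projection of the multipliers onto the nonnegative orthant, and the stopping rule are our own illustrative assumptions:

```python
def lagrangian_upper_bound(lmbda, zeta, max_iters, tol,
                           solve_subproblems, constraint_violation,
                           lower_bound):
    """Schematic subgradient loop: solve the decomposed subproblems,
    combine them into an upper bound, and update the multipliers until
    the gap to the lower bound is satisfactorily tight."""
    upper, gap = float("inf"), float("inf")
    for _ in range(max_iters):
        sols, upper = solve_subproblems(lmbda)       # Steps 3-4
        gap = (upper - lower_bound) / abs(lower_bound)
        if gap <= tol:                               # Step 5: terminate
            break
        g = constraint_violation(sols)               # subgradient direction
        lmbda = [max(0.0, l + zeta * gi) for l, gi in zip(lmbda, g)]
    return upper, gap
```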
where ζ𝜁\zetaitalic_ζ is the step size. Due to the model relaxation, the established upper bound is an overestimation of the globally optimal solution to the original problem (29). In other words, the bound is also a theoretical upper bound for the original problem, which allows us to quantify the optimality of its solution. We summarize the derivation procedure as follows:
C
2) Image quality indicator: As shown in Fig. 2e, DEviS can serve as an indicator of the quality of medical images. Uncertainty estimation is an intuitive and quantitative way to inform clinicians or researchers about the quality of medical images. DEviS indicates image quality quantitatively through the distribution of uncertainty values and qualitatively through the degree of explicitness of the uncertainty map. Furthermore, our developed UAF module aids in the initial screening of low-quality and high-quality data. High-quality data can be directly employed in clinical practice, while low-quality data necessitates expert judgment before utilization.
6) FIVES dataset. In the second application, the Fundus Image Vessel Segmentation (FIVES) dataset is used for the quality indicator. In the FIVES dataset, each image was evaluated for four qualities: normal, lighting and color distortion, blurring, and low-contrast distortion. In this experiment, we define normal images as high-quality data and images under other conditions as low-quality data. During the experimental process, DEviS was initially trained on the FIVES dataset, which comprises 300 slices of high-quality images. Subsequently, the performance of DEviS was evaluated on a mixed dataset from FIVES, consisting of 300 slices comprising both high and low-quality images. This mixed dataset comprised 159 high-quality slices and 141 slices of low-quality images. Throughout both the training and testing stages, each case was consistently adjusted to dimensions of 565×584565584565\times 584565 × 584 voxels, ensuring uniformity across the dataset.
2) Image quality indicator: As shown in Fig. 2e, DEviS can serve as an indicator of the quality of medical images. Uncertainty estimation is an intuitive and quantitative way to inform clinicians or researchers about the quality of medical images. DEviS indicates image quality quantitatively through the distribution of uncertainty values and qualitatively through the degree of explicitness of the uncertainty map. Furthermore, our developed UAF module aids in the initial screening of low-quality and high-quality data. High-quality data can be directly employed in clinical practice, while low-quality data necessitates expert judgment before utilization.
We conducted OOD experiments on the Johns Hopkins OCT dataset and the Duke OCT dataset with Diabetic Macular Edema (DME). As shown in Fig. 6a, we first observed a slight improvement in results for mixed ID and OOD data after using DEviS. We then found significant differences in segmentation performance between the models with and without UAF. Additionally, there were marked differences in the distribution of uncertainty between the ID and OOD data, especially after adding the UAF module, as shown in Fig. 6b. As depicted in Fig. 6c (i), we employed Uniform Manifold Approximation and Projection (UMAP) to visually assess the integration of our method. In the spatial clustering results of the base network framework, we observed overlapping of ID and OOD data batches. However, after integrating DEviS, we observed improved batch-specific separation of ID and OOD data, particularly for the ID data. Furthermore, the integration of UAF with DEviS effectively eliminated the OOD data, resulting in a more pronounced batch effect. Additionally, we present the uncertainty estimation map corresponding to the UMAP in Fig. 6c (ii). It is evident from the map that the boundary region between different batches exhibits significantly higher uncertainty. More intuitively, the segmentation results and uncertainty maps of ID and OOD data can be found in Fig. 8a. These results combine to show that DEviS with UAF provides a solution for filtering out abnormal areas where lesions may be present in OOD data.
In what follows, we apply DEviS with UAF to indicate the quality of data for real-world applications. The FIVES datasets are used for quality assessment experiments. We initially classified samples into three categories based on their quality labels: high quality, high & low quality, and low quality. We observed distinct performance variations among these categories (Fig. 7 a (i)). To further demonstrate its ability to indicate image quality, we delved into a combination of high and low-quality data to filter out high-quality data. Before the application of UAF, we identified 159 high-quality and 141 low-quality data samples. Upon implementing UAF, the distribution shifted, resulting in 153 high-quality and 61 low-quality data samples. This transition led to a remarkable increase in the proportion of high-quality data from 53% to 71%. Notably, the task at hand posed a greater challenge in assessing data quality compared to the detection of OOD data, as all data sources originated from the same distribution. We also found a significant performance boost with UAF in Dice and ECE metrics. (Fig. 7 a (ii)). Additionally, we investigated the distribution of uncertainty to discern differences between different qualities data (Fig. 7 b (i)). Moreover, the uncertainty distribution of high and low mixed quality with UAF was closer to the low-quality data (Fig. 7 b (ii)). The spatial clustering results of mixed-quality images were visualized using UMAP in the Fig. 7 c. Prior to incorporating our algorithm, some batch-specific separation was observed, albeit with partially overlapping regions (Fig. 7 c (i) 1st and 4th columns). However, upon integrating DEviS with UAF, a slight batch effect was observed (Fig. 7 c (i) 2nd, 3rd, 5th and 6th columns). Additionally, the UMAP visualization with uncertainty map exhibited uncertainty warnings for partially overlapping points, with noticeably high uncertainties along the edges of prediction errors (Fig. 7 c (ii) (1, 2)). Moreover, the segmentation results and uncertainty map of low-quality and high-quality images exhibited in Fig. 8 b, providing a more intuitive representation of the quality disparity. These results demonstrate that DEviS with UAF can serve as an image quality indicator to fairly value personal data in healthcare and consumer markets. This would help to remove harmful data while identifying and collecting higher-value data for diagnostic support.
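A minimal sketch of how such an uncertainty-aware screening could be applied (the mean-uncertainty criterion and threshold below are illustrative assumptions; the actual UAF rule may differ):

```python
import numpy as np

def uncertainty_aware_filter(uncertainty_maps, threshold):
    """Split cases into high-quality (usable directly) and low-quality
    (flagged for expert review) by their mean voxel uncertainty.

    uncertainty_maps: list of per-case uncertainty arrays from DEviS."""
    mean_u = np.array([u.mean() for u in uncertainty_maps])
    high_quality = np.flatnonzero(mean_u <= threshold)
    low_quality = np.flatnonzero(mean_u > threshold)
    return high_quality, low_quality
```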
D
$I_{F} = I_{F} \;{+\!\!\!+}\; I_{Fp}$
$\textbf{0.95}^{*}$ | $\textbf{3.7\,\%}^{*}$
$\textbf{0.98}^{*}$ | $\textbf{2.2\,\%}^{*}$
$\textbf{0.96}^{*}$ | $\textbf{3.2\,\%}^{*}$
$\{i_{Sp}^{1}, i_{Sp}^{2}, i_{Sp}^{3}, i_{Sp}^{4}, i_{Mp}^{1}, i_{Mp}^{2}, i_{Mp}^{3}, i_{Op}^{1}, i_{Op}^{2}, i_{Op}^{3}\}$. The three IBIs in $I_{Cp}$ that minimize the absolute error are chosen as shown in Equation (4) and are concatenated into the final IBI sequence, $I_{F}$. After iterating over the $q$ segments to obtain the complete $I_{F}$, the feasible optimized solutions are regarded as the final estimated IBIs from the motion-contaminated PPG.
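A plain reading of this selection step might look like the sketch below; the reference IBI and function names are hypothetical, and the exact criterion is Equation (4) in the source:

```python
import numpy as np

def select_ibis(candidates, reference_ibi, n_keep=3):
    """Keep the n_keep candidate inter-beat intervals (IBIs) closest in
    absolute error to a reference IBI estimate (an assumed reference,
    e.g. from a cleaner segment), preserving their original order."""
    candidates = np.asarray(candidates, dtype=float)
    errors = np.abs(candidates - reference_ibi)
    best = np.sort(np.argsort(errors)[:n_keep])
    return candidates[best]

# I_F = np.concatenate([I_F, select_ibis(I_Cp_segment, ref_ibi)])
```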
C
To further validate the effectiveness and reliability of the system, we deployed the system at both the transmitting and receiving ends and conducted real-channel image transmission using hardware. As shown in Fig. 9, YunSDR Y750 devices (introduction website: https://www.v3best.com/y750s) were used at both the transmitting and receiving ends, with the bitstream employing OFDM modulation and a bpp parameter of 0.1. The measured SNR of the wireless channel was approximately 0 dB. After completing the transmission, the average performance metrics of the received images are presented in Table V.
At a bpp value of 0.1 and an SNR of around 0 dB, the image metrics obtained from the hardware experiment fluctuate around the results obtained from the software simulation. In such low-SNR scenarios, our STSCI still performs well both in terms of image metrics and visual effects. The average values are slightly lower than, but very close to, the results from the software simulation.
These results demonstrate that STSCI is capable of performing well in real hardware deployment and transmitting over real channels. It also confirms the reliability of the software simulation results obtained earlier.
Meanwhile, Fig. 10 provides a visual example of hardware transmission along with its corresponding image metrics. According to Fig. 10, even at SNR around 0dB, the image metrics of the final image are still relatively high, without significant distortion or deformation. In contrast to the blurry and unclear version of the dial without enhancement, the enhanced version maintains clear visibility of the pointers and readings on the dial.
To further validate the effectiveness and reliability of the system, we deployed the system at both the transmitting and receiving ends and conducted real-channel image transmission using hardware. As shown in Fig. 9, YunSDR Y750 devices (introduction website: https://www.v3best.com/y750s) were used at both the transmitting and receiving ends, with the bitstream employing OFDM modulation and a bpp parameter of 0.1. The measured SNR of the wireless channel was approximately 0 dB. After completing the transmission, the average performance metrics of the received images are presented in Table V.
A
Note that this statement holds under condition (30), which implies that the received power of the desired source is stronger than the received power of each interference source, considering the attenuation stemming from the activity duration.
The proofs of Proposition 1 and Proposition 2 rely on the following lemma, which is important in its own right.
Third, following the same techniques in the proof of Proposition 1 and Proposition 2, similar results are derived for an alternative definition of the SIR: $\text{SIR}_{\text{tot}}(\mathbf{\Gamma}) \equiv \frac{\bm{d}_{0}^{H}\mathbf{\Gamma}\bm{d}_{0}}{\sum_{j=1}^{N_{\mathrm{I}}}\bm{d}_{j}^{H}\mathbf{\Gamma}\bm{d}_{j}}$.
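For concreteness, this quantity can be evaluated directly from the quadratic forms; the toy computation below is ours and not part of the original analysis:

```python
import numpy as np

def sir_tot(Gamma, d0, d_interf):
    """Evaluate SIR_tot(Gamma) = (d0^H Gamma d0) / sum_j d_j^H Gamma d_j
    for a filter correlation matrix Gamma, desired ATF d0, and a list of
    interference ATFs d_interf."""
    num = np.real(d0.conj() @ Gamma @ d0)
    den = sum(np.real(d.conj() @ Gamma @ d) for d in d_interf)
    return num / den
```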
Since we established that the Riemannian approach is better than the Euclidean one in terms of the SIR in Proposition 1, Proposition 2 implies that increasing the SNR further increases the gap between the two approaches. Nevertheless, it also indicates that the performance of the Riemannian approach in terms of the SIR is more sensitive to noise compared to the Euclidean counterpart.
Similarly to Proposition 1, the following Proposition 3 examines the performance in terms of the SIR defined in (43). Here, assumptions 2-4 are not required, and therefore, the ATFs of the interference sources could be correlated, and the number of sources is not limited by the number of microphones in the array.
A
The remainder of this manuscript is organized as follows. Section II includes an overview of related works from the literature on breathing anomaly detection using various sensing technologies and machine learning. Section III describes various human breathing patterns from the literature to be used as breathing classes for anomaly detection. Section IV presents the system model, relevant theory and lock-in detection process used in this study. The details of hardware components, data collection and initial data processing are depicted in Section V. Next, Section VI describes the handcrafted features used and their extraction process. Data classification process using the chosen machine learning algorithms are included in Section VII and the results along with their interpretations are discussed in Section VIII. Finally, Section IX presents the conclusions drawn from the whole effort and forecasts future research directions.
Some past classification efforts involved one-class classification or outlier detection, as in [30] where the model was trained using human breathing data in resting condition to predict if the person was exercising in new examples. Binary classification between normal breathing and apnea were performed in [29] to detect obstructive sleep apnea. Multiclass breathing classification efforts considered different types of breathing anomalies like tachypnea, bradypnea, hyperpnea, hypopnea etc. and sometimes more complicated anomalies like Cheyne-Stoke’s, Biot’s and Apneustic breathing as separate classes [24, 32, 31]. Most of these breathing patterns are explained in Section III. Data for these efforts were usually obtained from human volunteers who are generally unable to breathe using precise frequency, amplitude and pattern. Occasionally, data from patients with breathing disorders were utilized, but this approach had its limitations as well. This is because even the patients may not consistently exhibit abnormal breathing patterns which increases the risk of mislabeling the training data. In the current study, more reliable data were generated by using a programmable robot with precise human-like breathing capability. Various machine learning techniques were employed in the literature to classify breathing data, including decision tree, random forest, support vector machine, XGBoost, K𝐾Kitalic_K-nearest neighbors, feedforward neural network, and logistic regression, among others. The performance of these models was assessed using different evaluation metrics such as confusion matrices, K𝐾Kitalic_K-fold cross-validation, accuracy, precision, sensitivity (recall), specificity, F1-score etc. [31, 29, 24, 11, 19].
Feature extraction is an important step in machine learning-based data classification. After detrending, four handcrafted features were extracted from the collected data using MATLAB code for the following three cases:
Researchers have been applying machine learning and deep learning techniques on human respiration data collected through various technologies for anomaly detection. Most of these efforts made use of handcrafted features to perform breathing data classification for anomaly detection. Some of the common categories of features used in the literature were statistical features from the data (mean, standard deviation, skewness, kurtosis, root mean-square value, range etc.), signal-processing based features (Fourier co-efficients, autoregressive integrated moving average co-efficients, wavelet decomposition, mel-frequency cepstral coefficients, linear predictive coding etc.), and respiration related features (breathing rate, amplitude, inspiratory time, expiratory time etc.) [28, 29, 30, 11, 31, 24]. In some research efforts, deep neural networks were trained to recognize subtle features from breathing data before classification, thus making manual feature extraction redundant [19, 32, 9, 17, 26].
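A sketch of the kind of statistical features listed above is given below; this generic set is only illustrative, and the four features actually used in this study are the ones described in the text:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def statistical_features(x):
    """Common statistical descriptors of a (detrended) breathing segment."""
    x = np.asarray(x, dtype=float)
    return {
        "mean": float(np.mean(x)),
        "std": float(np.std(x)),
        "skewness": float(skew(x)),
        "kurtosis": float(kurtosis(x)),
        "rms": float(np.sqrt(np.mean(x ** 2))),
        "range": float(np.ptp(x)),
    }
```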
The features for each data were saved in separate rows in CSV files along with the class label for each row. Thus, labeled features were prepared for the subsequent classification task. The details of extracted handcrafted features are provided as follows.
C
$\bar{x}^{\mathsf{u}}_{k} = \sum_{m\in\mathcal{M}^{\mathsf{UE}}_{k}} (\widehat{g}^{(i)}_{km})^{*} y^{\mathsf{u}}_{m} = \sum_{m\in\mathcal{M}^{\mathsf{UE}}_{k}} \sum_{k'\in\mathcal{K}} (\widehat{g}^{(i)}_{km})^{*} g^{(i)}_{k'm} \sqrt{\rho_{k'}}\, x^{\mathsf{u}}_{k'} + \sum_{m\in\mathcal{M}^{\mathsf{UE}}_{k}} (\widehat{g}^{(i)}_{km})^{*} w^{\mathsf{u}}_{m}.$
Based on (5) and the formulation in [18], the effective uplink signal to interference plus noise ratio (SINR) of user k𝑘kitalic_k is given by
For uplink transmission, each user k𝑘kitalic_k transmits a data signal xk𝗎subscriptsuperscript𝑥𝗎𝑘x^{\mathsf{u}}_{k}italic_x start_POSTSUPERSCRIPT sansserif_u end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT.
From Theorem 18, we conclude that learning based on our Markov games model is equivalent to performing the pilot update which minimizes the interference due to PC at each near-RT PA.
The received signal y¯k𝖽subscriptsuperscript¯𝑦𝖽𝑘\bar{y}^{\mathsf{d}}_{k}over¯ start_ARG italic_y end_ARG start_POSTSUPERSCRIPT sansserif_d end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT for user k𝑘kitalic_k is then given by
A
Since $\Re(b_{k}(\mathbbm{T})) = \Re(\mathsf{E}[\mathbbm{h}_{k}^{\mathsf{H}}\mathbbm{t}_{k}])$ is linear, convexity of the reformulated SINR constraints readily follows. We omit the proof for the convexity of the objective and power constraints, since it is trivial. Finally, repeated applications of the Cauchy-Schwarz inequality prove that all the aforementioned functions are also proper functions.
is readily given by combining Lemma 3, Lemma 4, Lemma 5, and by noticing that the unique solution 𝕋′superscript𝕋′\mathbbm{T}^{\prime}blackboard_T start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT to Problem (32) is also a solution to Problem (10) (note: the converse does not hold in general).
Let 𝛌⋆superscript𝛌⋆\bm{\lambda}^{\star}bold_italic_λ start_POSTSUPERSCRIPT ⋆ end_POSTSUPERSCRIPT be a solution to Problem (14). Then, a solution to Problem (10) is given by any solution to
Problem (32) admits a unique solution 𝕋′∈𝒯superscript𝕋′𝒯\mathbbm{T}^{\prime}\in\mathcal{T}blackboard_T start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ∈ caligraphic_T. Furthermore, strong duality holds for Problem (32), i.e., Problem (33) and Problem (32) have the same optimum, and there exist Lagrangian multipliers (𝛌′,𝛍′)superscript𝛌′superscript𝛍′(\bm{\lambda}^{\prime},\bm{\mu}^{\prime})( bold_italic_λ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT , bold_italic_μ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT ) solving Problem (33).
The next simple lemma can be used to relate Problem (10) to Problem (32), following a similar idea in [30, 23].
D
In this subsection, we first obtain an estimate (A^,B^)^𝐴^𝐵(\hat{A},\hat{B})( over^ start_ARG italic_A end_ARG , over^ start_ARG italic_B end_ARG ) offline from measured data of the unknown real system (2), and then synthesize a controller (2.1) with zero terminal matrix P=0𝑃0P=0italic_P = 0. This is the classical receding-horizon LQ controller [10].
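A minimal sketch of this certainty-equivalent pipeline, least-squares identification followed by the first gain of an N-step receding-horizon LQ controller with zero terminal matrix, is shown below; the helper names and data layout are our own illustrative choices:

```python
import numpy as np

def identify_lti(X, U):
    """Least-squares estimate (A_hat, B_hat) from offline data, where
    X = [x_0 ... x_T] (n x T+1) and U = [u_0 ... u_{T-1}] (m x T)."""
    Z = np.vstack([X[:, :-1], U])            # stacked regressors [x_t; u_t]
    AB = X[:, 1:] @ np.linalg.pinv(Z)
    n = X.shape[0]
    return AB[:, :n], AB[:, n:]

def rhc_first_gain(A, B, Q, R, N, P=None):
    """First feedback gain of an N-step LQ problem with terminal matrix P
    (P = 0 gives the classical receding-horizon LQ controller)."""
    n = A.shape[0]
    P = np.zeros((n, n)) if P is None else P
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K                                  # apply u_t = -K x_t, then re-solve
```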
There are many recent studies on linear system identification and its finite-sample error bounds [22, 23, 18].
In this work, the obtained bounds hold regardless of whether εmsubscript𝜀m\varepsilon_{\mathrm{m}}italic_ε start_POSTSUBSCRIPT roman_m end_POSTSUBSCRIPT and εpsubscript𝜀p\varepsilon_{\mathrm{p}}italic_ε start_POSTSUBSCRIPT roman_p end_POSTSUBSCRIPT are coupled. The presence of coupling, e.g., εp=h⁢(εm)subscript𝜀pℎsubscript𝜀m\varepsilon_{\mathrm{p}}=h(\varepsilon_{\mathrm{m}})italic_ε start_POSTSUBSCRIPT roman_p end_POSTSUBSCRIPT = italic_h ( italic_ε start_POSTSUBSCRIPT roman_m end_POSTSUBSCRIPT ) for some function hℎhitalic_h, can be easily incorporated by plugging function hℎhitalic_h into the bound g𝑔gitalic_g. In addition, the error bounds obtained will also depend on the system matrices A⋆subscript𝐴⋆A_{\star}italic_A start_POSTSUBSCRIPT ⋆ end_POSTSUBSCRIPT and B⋆subscript𝐵⋆B_{\star}italic_B start_POSTSUBSCRIPT ⋆ end_POSTSUBSCRIPT. To simplify the algebraic expressions of the bounds, we upper bound the system matrices as
In this work, the true model (A⋆,B⋆)subscript𝐴⋆subscript𝐵⋆(A_{\star},B_{\star})( italic_A start_POSTSUBSCRIPT ⋆ end_POSTSUBSCRIPT , italic_B start_POSTSUBSCRIPT ⋆ end_POSTSUBSCRIPT ) is unknown, and we only have access to an approximate model (A^,B^)^𝐴^𝐵(\hat{A},\hat{B})( over^ start_ARG italic_A end_ARG , over^ start_ARG italic_B end_ARG ) that differs from the true model with an error: ‖A^−A⋆‖⩽εmnorm^𝐴subscript𝐴⋆subscript𝜀m\|\hat{A}-A_{\star}\|\leqslant\varepsilon_{\mathrm{m}}∥ over^ start_ARG italic_A end_ARG - italic_A start_POSTSUBSCRIPT ⋆ end_POSTSUBSCRIPT ∥ ⩽ italic_ε start_POSTSUBSCRIPT roman_m end_POSTSUBSCRIPT and ‖B^−B⋆‖⩽εmnorm^𝐵subscript𝐵⋆subscript𝜀m\|\hat{B}-B_{\star}\|\leqslant\varepsilon_{\mathrm{m}}∥ over^ start_ARG italic_B end_ARG - italic_B start_POSTSUBSCRIPT ⋆ end_POSTSUBSCRIPT ∥ ⩽ italic_ε start_POSTSUBSCRIPT roman_m end_POSTSUBSCRIPT for some εm⩾0subscript𝜀m0\varepsilon_{\mathrm{m}}\geqslant 0italic_ε start_POSTSUBSCRIPT roman_m end_POSTSUBSCRIPT ⩾ 0. This approximate model and its error bound can be obtained, e.g., from recent advances in linear system identification [22, 23].
where the regret is linear in $T$. This observation matches the result in [34], where the regret of a linear unconstrained RHC controller, with a fixed prediction horizon and an exact system model, is linear in $T$. This linear regret is caused by the fact that even if the model is perfectly identified, the RHC controller still deviates from the optimal LQR controller due to its finite prediction horizon.
A
Remark. Since the policy (7) is conditioned on a partial observation $\bm{o}_{k}$ of the state $\bm{s}_{k}$, the stationary MDP we have defined in this section is, in fact, a partially observable MDP (POMDP). In this case, it is known that the globally optimal policy depends on a summary of the history of past observations and actions, $\bm{h}_{k}=\{\bm{o}_{1},\bm{a}_{1},\dots,\bm{o}_{k}\}$, rather than just the current observation $\bm{o}_{k}$ (Kaelbling et al., 1998). However, policies formulated based on an incomplete summary of $\bm{h}_{k}$ are common in practice and still achieve good results (Sutton & Barto, 2018). We therefore pursue this approach in the present paper, and leave for future work testing the generalization of our policy input to a more complete summary of $\bm{h}_{k}$. We also note that policy gradient methods, which PPO belongs to, do not require the Markov property of the state (that is, conditional independence of future states on past states given the present state) and can therefore be readily applied to the POMDP setting. For our problem, this guarantees that the PPO algorithm will converge to a locally optimum policy.
In this paper, we have introduced the reinforcement learning reduced-order estimator (RL-ROE), a new state estimation methodology for parametric PDEs. Our approach relies on the construction of a computationally inexpensive reduced-order model (ROM) to approximate the dynamics of the system. The novelty of our contribution lies in the design, based on this ROM, of a reduced-order estimator (ROE) in which the filter correction term is given by a nonlinear stochastic policy trained offline through reinforcement learning. We introduce a trick to translate the time-dependent trajectory tracking problem in the offline training phase to an equivalent stationary MDP, enabling the use of off-the-shelf RL algorithms. We demonstrate using simulations of the Burgers and Navier-Stokes equations that in the limit of very few sensors, the trained RL-ROE vastly outperforms a Kalman filter designed using the same ROM, which is attributable to the nonlinearity of its policy (see Appendix I for a quantification of this nonlinearity). Finally, the RL-ROE also yields accurate high-dimensional state estimates for ground-truth trajectories corresponding to various parameter values without direct knowledge of the latter.
The RL-ROE exhibits robust performance across the entire parameter range $\mu\in[0,1]$, including when estimating trajectories corresponding to previously unseen parameter values. Finally, Figure 4 (right) displays the average over time and over $\mu$ of the normalized $L_{2}$ error for varying number $p$ of sensors. Note that each value of $p$ corresponds to a separately trained RL-ROE. As the number of sensors increases, the KF-ROE performs better and better until its accuracy overtakes that of the RL-ROE. We hypothesize that the accuracy of the RL-ROE is limited by the inability of the RL training process to find an optimal policy, due to both the non-convexity of the optimization landscape as well as shortcomings inherent to current deep RL algorithms. This being said, the strength of the nonlinear policy of the RL-ROE becomes very clear in the very sparse sensing regime; its performance remains remarkably robust as the number of sensors reduces to 2 or even 1. In Appendix F, spatio-temporal contours (similar to those in Figure 3) of the ground-truth solution and corresponding estimates for $p=2$ and $12$ illustrate that the slight advantage held by the KF-ROE for $p=12$ is reversed into clear superiority of the RL-ROE for $p=4$.
A big challenge is that ROMs provide a simplified and imperfect description of the dynamics, which negatively affects the performance of the state estimator. One potential solution is to improve the accuracy of the ROM through the inclusion of additional closure terms (Ahmed et al., 2021). In this paper, we leave the ROM untouched and instead propose a new design paradigm for the estimator itself, which we call a reinforcement-learning reduced-order estimator (RL-ROE). The RL-ROE is constructed from the ROM in an analogous way to a Kalman filter, with the crucial difference that the linear filter gain function, which takes in the current measurement data, is replaced by a nonlinear policy trained through reinforcement learning (RL). The flexibility of the nonlinear policy, parameterized by a neural network, enables the RL-ROE to compensate for errors of the ROM while still taking advantage of the imperfect knowledge of the dynamics. Indeed, we show that in the limit of sparse measurements, the trained RL-ROE outperforms a Kalman filter designed using the same ROM and displays robust estimation performance across different dynamical regimes. To our knowledge, the RL-ROE is the first application of RL to state estimation of parametric PDEs.
We evaluate the state estimation performance of the RL-ROE for systems governed by the Burgers equation and Navier-Stokes equations. For each system, we first compute various solution trajectories corresponding to different physical parameter values, which we use to construct the ROM and train the RL-ROE following the procedure outlined in Section 2.4. The trained RL-ROE is finally deployed online and compared against a time-dependent Kalman filter constructed from the same ROM, which we refer to as KF-ROE. The KF-ROE is given by equations (4a) and (5), with the calculation of the time-varying Kalman gain detailed in Appendix C of the supplementary materials.
D
The massive presence of networked systems in many areas is making distributed optimization more and more attractive for a wide range of tasks.
convergence of the network systems to a steady-state configuration corresponding to a stationary point of the problem.
These tasks often involve dynamical systems (e.g., teams of robots or electric grids) that need to be controlled while optimizing a cost index.
The massive presence of networked systems in many areas is making distributed optimization more and more attractive for a wide range of tasks.
In [19], algebraic systems are controlled by relying on gradient information affected by random errors. As for feedback optimization in multi-agent systems, the early reference [20] proposes an approach based on saddle point flows, while [21] addresses a partition-based scenario.
B
CNN has dominated the CV field over the past several years and is capable of capturing spatial hierarchies of the input features with successive convolution operations [5]. A typical CNN consists of several types of layers, including convolution layer, dropout layer, normalization layer, etc. Compared with CNN, the transformer is an emerging role in the CV field initially proposed in the natural language processing field. The core component of the transformer is the self-attention mechanism or attention in short. The attention mechanism in a transformer network creates interdependencies among different positions within a single sequence, enabling the model to compute a context-aware representation of each position in the sequence [6]. The primary distinction between the CNN- and the transformer-based methods is that CNN tends to emphasize local features, whereas the transformer is more oriented toward global features. As the visual input may contain lesions of varying scales, it is crucial to enhance the ability of the model to capture features at both local and global scales. To take advantage of both CNN and transformer, the combination of CNN and transformer has developed rapidly. This kind of integration aids in capturing both local and global features and reducing the neglect of potentially diseased areas. Specifically, varying medical datasets may contain lesions of varying sizes. By capturing and understanding features across a broader range of scales, the hybrid model improves its ability to recognize and interpret features in unseen images, which may present lesions at varying scales. This extensive scale coverage enables the model to adapt to unseen images better, thereby significantly contributing to the overall performance and generalization capability.
To further demonstrate the generalization ability of CECT, we contrast its performance with SOTA methods on unseen datasets. Specifically, the models undergo training and testing on distinct datasets. Given the notable performance of most methods on the COVID-19 radiography dataset, we perform an inter-dataset evaluation using the COVIDx CXR-3 dataset for training and the COVID-19 radiography dataset for testing. The performance comparison across inter-dataset evaluation and intra-dataset evaluation can be found in fig. 7. From the results, we can find that CECT outperforms SOTA methods to a large extent in the inter-dataset evaluation. It demonstrates superior performance among all metrics with an ACC of 90.9%, showing a merely 7.2% decrease compared with the intra-dataset evaluation. This is somewhat higher than the intra-dataset evaluation of several SOTA methods such as CrossViT and MobileViT. Considering NPV, SEN, and FOS, CECT achieves the highest results of 92.0%, 76.5%, and 81.5%, respectively. The highest PPV and SPE are achieved by DeiT, demonstrating even higher results compared with the intra-dataset evaluation. This is unreasonable and therefore deserves questioning and further investigation. Upon close examination, it can be found that though DeiT shows exemplary performances on PPV and SPE, it severely underperformed on others such as SEN. Such results can suggest the over-prediction of certain scenarios and do not indicate superior performance. Similar results are observed for the CSPNet and CoaT. The extraordinary performance under such a challenging task demonstrates the generalization ability of the CECT.
In this paper, we propose a novel CECT model by controllable ensemble CNN and transformer for COVID-19 classification. The CECT can extract features at both multi-local and global scales without sophisticated module design. Moreover, the contribution of local features at different scales can be arbitrarily controlled with the proposed ensemble coefficients. Extensive intra-dataset and inter-dataset experiments on two public COVID-19 datasets demonstrate that CECT surpasses existing methods, whether pure CNN- or transformer-based or their integration. The extraordinary performance and generalization ability demonstrate the effectiveness of the proposed CECT. To reveal the effectiveness and importance of capturing both multi-local and global features, we perform extensive ablation experiments on the COVIDx CXR-3 dataset and the results show that features at each scale count. The efficacy of CECT underscores the notion that increasing the complexity of the network architecture is not invariably essential for enhancing performance. A streamlined yet effective architecture can not only achieve superior performance but also be spread and applied more easily. While CECT exhibits outstanding performance in comparison to SOTA methods, it is pertinent to highlight that its parameter count can be relatively large. This increased computational demand stems from the integration of various blocks and branches, potentially making it less suitable for mobile applications. The future perspectives of CECT can be two-fold. Firstly, given that the CECT is tailored for image classification, it can be valuable to adapt it for fine-grained tasks, notably image segmentation. Unlike classification, segmentation aims at capturing pixel-level features from the input and has a more strict requirement for capturing features at different scales due to the various sizes across different objects. Under this scenario, the CECT-based architecture can further distinguish its strengths. Secondly, the selection of the optimal coefficient group could be made more intrinsic and adaptive, rather than being pre-determined. For instance, the coefficients can be integrated as adaptive hyperparameters, updated iteratively during the training process to optimize model performance. This approach could eliminate the need for separate coefficient searching and offer a greater variety of combinations.
To demonstrate the effectiveness and importance of capturing both multi-local and global features, we perform extensive ablation experiments on the COVIDx CXR-3 dataset as CECT outperforms SOTA methods to a large extent. The experiments are performed across different block configurations and feature capture scales and the results can be found in table 6. For the variants with PCE purely, we assess the performance of the three sub-encoders with designed classification heads. For the variant merely consisting of WAC, we train it with the prediction head. The ATD is not evaluated separately as it serves for decoding. In case all PCE, ATD, and WAC exist, we simulate the scenarios in which local features at varying scales are omitted. This can result in two variants, in which the one lacks local features at both 28 × 28 and 56 × 56 scales and the other lacks at the 28 × 28 scale only. It is clear that the variants employing PCE or WAC are designed to simulate cases utilizing either CNN- or transformer-based architecture. Conversely, variants including PCE, ATD, and WAC simulate scenarios where CNN- and transformer-based architectures are married. Upon examination, we note a substantial decrease in model performance when either the CNN- or transformer-based blocks are used exclusively. We observe the highest accuracy of 85.2% and 85.0%, for the variants utilizing CNN- and transformer-based architectures, respectively. When integrating both CNN- and transformer-based methods, it is observed that the overall performance deteriorates as more features are absent. When the model lacks local features at the 28 × 28 scale, the observed accuracy stands at 84.2%. With the absence of local features at both the 28 × 28 and 56 × 56 scales, the performance further declines to 74.7%. We present the t-SNE visualization across different groups of ensemble coefficients in fig. 8 to illustrate the results intuitively. From the results, it can be inferred that as the amount of features captured increases, the discriminative ability of the model improves noticeably. This can further underscore the effectiveness and importance of capturing multi-scale features from the input, thereby highlighting the novelty of our CECT approach.
Here, we develop a novel classification model CECT by Controllable Ensemble CNN and Transformer to improve the accuracy of COVID-19 diagnosis. The CECT is composed of a parallel convolutional encoder (PCE) block, an aggregate transposed-convolutional decoder (ATD) block, and a windowed attention classification (WAC) block. The PCE captures the features at multi-local scales. The ATD decodes the captured features to the identical scale and sums them using proposed ensemble coefficients. The summed features are fed into the WAC to capture the global features. Compared with existing approaches, CECT can extract features at both multi-local and global scales without complicated module design. Moreover, the contribution of local features at different scales can be controlled with the ensemble coefficients we proposed. Experimental results on the COVID-19 radiography dataset [7, 8, 9] and the COVIDx CXR-3 dataset [10, 11] show the leadership of CECT. The highest accuracy of 98.1% is achieved on the intra-dataset evaluation, outperforming state-of-the-art (SOTA) methods to a large extent. Moreover, CECT achieves a 90.9% accuracy on the inter-dataset evaluation, demonstrating its extraordinary generalization ability. To sum up, our main contributions are:
D
The results of our experiments are summarized in Table 1. The average ROC-AUC achieved by SCANet was 0.7732 $\pm$ 0.039. This is a significant improvement over the previously published fully automatic deep learning model [Siddiqui et al.(2021)]. Our method also demonstrates higher and more robust performance metrics than the state-of-the-art model requiring manual clot segmentation [Hilbert et al.(2019)]. In addition to the literature benchmarks, SCANet performs better than a radiomics-based model and standard deep learning architecture when trained on the same cohort [Zhang et al.(2021a)].
The cohort used for this study comprises patients treated from 2012-2019. A patient was included in the cohort if they had CT and CTA imaging, underwent thrombectomy for stroke, and were assigned an mTICI score post-MTB. Of the 254 eligible patients, 69 patients were excluded due to missing either CT or CTA series, and 8 were excluded due to unclear stroke location, leaving 177 patients total. The dataset matched demographic distributions seen in other stroke studies, and the target labels were approximately balanced. Patient images were processed using a previously published pipeline adapted for CT, which included brain extraction and registration to a CT template in MNI space [Zhang et al.(2021b)].
Clinicians decide to perform MTB based on likelihood of successful recanalization, but it is unknown what factors underlie MTB responses. Clinical images such as CT and CTA contain valuable information to predict procedure outcome, and deep learning models have the capability to learn representations from highly dimensional imaging data. This study sought to predict final MTB recanalization in a fully automatic manner, leveraging recent advances in vision transformers to localize to the stroke region. We showed that our proposed model outperforms prior fully- and semi-automated machine and deep learning models. The primary limitation of our study is the small sample size, which precludes more robust validation. A few future directions include experimenting on a larger dataset across several institutions, optimizing the preprocessing pipeline to more effectively preserve high resolution CTA, and correlation of the immediate treatment response with long-term outcomes. These steps can produce a model that more accurately predicts MTB recanalization, in turn helping doctors and patients in the treatment decision process.
Stroke is the fifth leading cause of death and the leading cause of long-term disability; of the 795,000 new and recurrent strokes each year, acute ischemic stroke (AIS) accounts for 87% of cases [Tsao et al.(2022)]. Mechanical thrombectomy (MTB) is the leading treatment for patients with clots in large blood vessels. In this procedure, a blood clot is surgically removed from an artery to achieve recanalization, i.e., restored blood flow. As a standard measurement for recanalization achieved, a modified treatment in cerebral ischemia (mTICI) score [Tomsick(2007)] is assigned to patients post-treatment. This post-treatment score is clinically significant, as it has been shown that favorable scores, i.e., mTICI 2c or greater, are associated with better clinical outcomes in the long term [Ángel Chamorro et al.(2017)]. Unfavorable scores (mTICI less than 2c) indicate that the treatment did not effectively clear the blood vessel. Imaging has been identified as one modality to illustrate patient physiology that could influence the likelihood of a successful MTB procedure. Predicting final mTICI score prior to a procedure can provide doctors with more information when considering treatment options. Deep learning has been shown to leverage the amount of detail in images to improve prediction accuracy [LeCun et al.(2015)LeCun, Bengio, and Hinton]. Current literature presents models that perform semi-automated prediction of mTICI score based on pre-treatment CT imaging, with inconsistent performance [Hilbert et al.(2019), Siddiqui et al.(2021)]. We propose a fully automated model that uses both CT and CTA images to predict mTICI score post-treatment, incorporating attention modules into a deep learning network to effectively localize to informative stroke regions without requiring manual segmentation.
The results of our experiments are summarized in Table 1. The average ROC-AUC achieved by SCANet was 0.7732 $\pm$ 0.039. This is a significant improvement over the previously published fully automatic deep learning model [Siddiqui et al.(2021)]. Our method also demonstrates higher and more robust performance metrics than the state-of-the-art model requiring manual clot segmentation [Hilbert et al.(2019)]. In addition to the literature benchmarks, SCANet performs better than a radiomics-based model and standard deep learning architecture when trained on the same cohort [Zhang et al.(2021a)].
B
2) Radio: The UE and LuMaMi are synchronized using a cable to avoid losing the signal between the receiver and the transmitter. The radio data is recorded on the testbed, thus its computer is connected to the NTP server to get the correct timestamp. To validate the timestamps, the Raspberry Pi is connected to an RF switch on the robot, which can turn the transmission of the UE on and off. Afterwards, we plot when the UE starts and stops transmissions and compare the results to the timestamps obtained by the NTP server. The timestamp mismatch results in a maximum ground truth error of 0.5 mm, which is negligible.
The microphone on the robot is placed as close as possible to the speaker (sound source) and works as a reference to synchronize the speaker with the microphones. In addition to the 12 audio tracks, a synchronization pulse from the ground truth system on start and stop is recorded as a 13th track (“Sync”). To make calculations easier by viewing the sound source as a point source, only one side of the speaker is enabled (playing sound), and the head of the reference microphone is placed directly in front of the sound source. All microphones, except the one on the robot, have two markers placed, as seen on the left side of Fig. 4.
2) Radio: The UE and LuMaMi are synchronized using a cable to avoid losing the signal between the receiver and the transmitter. The radio data is recorded on the testbed, thus its computer is connected to the NTP server to get the correct timestamp. To validate the timestamps, the Raspberry Pi is connected to an RF switch on the robot, which can turn the transmission of the UE on and off. Afterwards, we plot when the UE starts and stops transmissions and compare the results to the timestamps obtained by the NTP server. The timestamp mismatch results in a maximum ground truth error of 0.5 mm, which is negligible.
The NTP server runs on the Raspberry Pi and is used to synchronize all computers regularly, except the computer used for sound recordings. One of the Raspberry Pi Input/Output pins, configured as input with interrupt function, is connected to the Qualisys Sync Unit, in order to listen to the short pulse (TTL signal) sent by the ground truth system when the recording starts. This TTL signal is called the “start signal” in Fig. 5 and is also sent to the sound system, connecting all the systems together. A more detailed description of the synchronization methods and the corresponding verification process are as follows.
3) Audio: Synchronization between microphones is done by connecting all microphones to a single sound card. In order to synchronize the common timestamp, a separate circuit is built to convert the TTL signal from the synchronization box to an audio signal which is passed to the sound card as a separate channel, as described above. For the audio system, the 13th “Sync” channel is the recording of the pulse, which gets triggered when the mocap system starts and stops recording. The synchronization is verified using a clapperboard. The markers on the clapperboard are tracked by the ground truth system and seen by the vision system. When its two sections are clapped shut, it creates a distinctive sound and simultaneously marks a visual cue.
D
The normalized spectrum is shown in Fig. 4(a) with $\lambda_{c}=2.5$ and $\Delta\lambda/\lambda_{c}=80\%$.
The first step is referenced pattern zooming. Since the reference is usually designed to be real-valued and without fine structures, it is easily reconstructed with a conventional CDI algorithm such as HIO. In this step, we are able to measure slowly and precisely, meaning repeated measurements are possible to increase the dynamic range of the diffraction pattern and reduce noise. As a result, a precise, high-quality reference is known before the reconstruction of the sample.
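For context, here is a minimal sketch of the kind of conventional HIO reconstruction referred to above, assuming a measured Fourier magnitude and a known support mask; the update rule is the textbook hybrid input-output iteration, not the specific recipe (with the cross-correlation constraint) used in this work.

```python
import numpy as np

def hio(magnitude, support, n_iter=500, beta=0.9, seed=0):
    """Hybrid input-output phase retrieval.

    magnitude : measured Fourier modulus (same shape as the image).
    support   : boolean mask of the object support.
    """
    rng = np.random.default_rng(seed)
    g = rng.random(magnitude.shape) * support      # random initial guess inside support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # Fourier magnitude constraint: keep the phase, replace the modulus.
        G = magnitude * np.exp(1j * np.angle(G))
        g_prime = np.real(np.fft.ifft2(G))
        # Support and positivity constraint, with HIO feedback outside it.
        inside = support & (g_prime >= 0)
        g = np.where(inside, g_prime, g - beta * g_prime)
    return g
```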
In the second step, both the reference and the sample are exposed to the beam. The pattern is recorded to reconstruct the complex-valued, finely structured sample. Using the information measured in the first step and applying the cross-correlation constraint, the Fourier magnitude constraint, and the support constraint in sequence, the image is reconstructed with high speed and a high reconstruction rate. The algorithm is based on the following equations:
The oversampling of $\lambda=1$ is 2. The same recipes as in Fig. 3(b-d) are shown in Fig. 4(b-d).
The image reconstructed with 1000 RAAR iterations and a shrink-wrap support update every 20 iterations is shown in Fig. 3(d).
C
A substantially different scenario from an astrodynamics perspective arises when servicers must navigate between orbital planes of the constellation by primarily utilizing their own propulsion system. Such maneuvers are commonly prohibitively expensive for chemical thrusters, thus requiring the use of low-thrust propulsion with high specific impulse.
The formulation is applied to a scenario for placing depots to service the GPS constellation, Galileo constellation, and the two constellations simultaneously.
As on-orbit servicing technology matures in the GEO market, the MEO and GSO markets would present a natural extension of such service and high-level design of such architecture may be conducted with the OFLP.
This is done by adapting the FLP to the on-orbit servicing depot of high-altitude satellite constellations; the proposed formulation is coined as the Orbital Facility Location Problem (OFLP).
This scenario is relevant when the constellation to be serviced is at high altitudes, such as Medium Earth Orbit (MEO) or high-inclination GSO.
D
Our results show that applying SpecAugment in the data slightly improves the performance (PER and WER) on the raw test dataset and the augmented test set. We also demonstrate that experiments with augmented test datasets have the best results when the model was trained on that augmented training dataset. For the PR task,  HuBERT-Gaussian-Noise (13.10%) and  wav2vec-Gaussian-Noise (70.67%) showed the lowest PER on test sets with Gaussian Noise. For the ASR task,  HuBERT-Speed-Perturbation (21.63%) and  wav2vec-Speed-Perturbation (34.22%) showed the lowest WER on test sets with speed perturbation.
Among the multiple self-supervised pre-trained models available in the S3PRL toolkit, we selected two discriminative models: wav2vec [28] and HuBERT [11].
Phoneme Recognition (PR) and Automatic Speech Recognition (ASR) are two of the most common speech-processing tasks that can greatly benefit from applying these augmentation techniques. We used S3PRL, an open-source toolkit that targets self-supervised learning for speech processing  [38]. It supports easy benchmarking of different speech representation models. In this paper, we chose to experiment with two pre-trained models, wav2vec [28] and HuBERT [11] to see how different augmentation techniques impact the performance of both PR and ASR tasks.
Because of the temporal character of the spoken signal and the distinct physical meaning of the spectrogram, audio data has its own augmentation methods. For the raw audio stream, for example, adding Gaussian noise, temporal stretching, and pitch shifting are three common augmentation methods. SpecAugment [23] is a popular feature augmentation approach for spectrograms. The two-dimensional spectrogram is interpreted as an image in this manner, with time on the horizontal axis and frequency on the vertical. This feature is augmented by time warping, frequency masking, and time masking. Some of the first studies of noise-tolerant ASR using deep networks were RNNs on Aurora-2 and DNNs on Aurora-4, respectively [32, 29]. The first investigates the transfer performance of an RNN-based acoustic model trained purely on clean speech, while the second investigates alternative noise-aware training regimes for DNNs and which of them are most advantageous. The use of data augmentation on low-resource voice recognition tasks, in which models were assisted by generated data, is demonstrated in [12, 36, 26]. In [9], in order to strengthen the model and make it more robust, a noisy audio source was layered on top of clean audio. Raw audio was manipulated in terms of speed in order to perform LVCSR tasks in [13]. However, cutting-edge models such as HuBERT [11] and wav2vec [28] (which used only trimming, not to expand the dataset) did not examine their models’ performance with data augmentation. The available pre-trained models are trained with LibriSpeech, with no augmentation. In this work, we want to examine how finetuning with augmented data affects model robustness and generalization to real-world data (data with noise or changes in speed).
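To make the waveform- and spectrogram-level augmentations above concrete, here is a minimal sketch of additive Gaussian noise and SpecAugment-style masking; the SNR and mask sizes are arbitrary illustrative values, not the settings used in the cited papers.

```python
import numpy as np

def add_gaussian_noise(wave, snr_db=10.0, rng=np.random.default_rng(0)):
    """Add white Gaussian noise at a target signal-to-noise ratio (dB)."""
    signal_power = np.mean(wave ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return wave + rng.standard_normal(wave.shape) * np.sqrt(noise_power)

def spec_augment(spec, freq_mask=8, time_mask=20, rng=np.random.default_rng(0)):
    """SpecAugment-style masking: zero out one random frequency band and one time span."""
    spec = spec.copy()
    n_freq, n_time = spec.shape
    f0 = rng.integers(0, max(1, n_freq - freq_mask))
    t0 = rng.integers(0, max(1, n_time - time_mask))
    spec[f0:f0 + freq_mask, :] = 0.0
    spec[:, t0:t0 + time_mask] = 0.0
    return spec
```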
S3PRL is an open-source toolkit with powerful functionality at all levels of model training, development, and deployment. The name is an acronym for  Self-Supervised Speech Pre-training and Representation Learning. In this project, we utilize this toolkit for all of our model training and evaluation.
D
On top of the long propagation delay, non-terrestrial platforms also have limited bandwidth due to the scarcity of spectrum resources and the need to ensure no additional interference to the licensed services.
As the generic RL frameworks depend on the feedback received from the environment, the additional overhead introduced by the network parameters results in undesirable network resource consumption.
One of the major motivating factors for implementing feedback-based learning, such as RL methods, in NTNs, is the inherent feedback system of the current cellular networks. CSI information is readily available for the BSs which can be helpful in network optimization approaches. However, with the emergence of NTNs, new challenges arise, necessitating the efficient design of feedback mechanisms to minimize the overall overhead while improving network performance. This consideration is crucial, as AI approaches for addressing various issues may require similar types of feedback. The utilization of combined feedback can prove highly beneficial in optimizing network performance and achieving efficient resource allocation, thus enhancing the overall effectiveness of AI algorithms in NTNs.
This additional communication overhead puts an additional burden on the limited spectrum of resources allocated for the non-terrestrial platforms.
However, the performance of these algorithms is highly dependent on the feedback received from the environment.
A
Consequently, the swing leg pairs extend outward during the flight phase, termed bounding with extended suspension (BE).
Similarly, the FG and FE branches ultimately converge with the pronking branch and merge at bifurcation point A.
In Fig. 3, these solutions are represented by the solid orange curve, which bifurcates from the PF branch at $\dot{q}_{x}=4.4~[\sqrt{gl_{o}}]$ (designated as black dot A).
As the solutions along the HG and HE branches approach point A, all four legs tend to synchronize, ultimately converging with the pronking branch at point A.
In Fig. 3, these solutions are represented by red dashed curves, connected to the pronking branch at the same bifurcation point A.
D
Recent literature states that convolutional neural networks (CNNs) have shown remarkable performance in many computer vision tasks, such as image classification [1, 2, 3], object detection [4, 5, 6, 7, 8], semantic segmentation [9, 10, 11, 12, 13] and clustering [14, 15, 16]. One of the most famous and successful CNNs is ResNet [1], which enables the construction of very deep networks to get better performance in accuracy. However, with more and more layers being used, the network becomes bloated. In this case, the huge number of parameters increases the computational load for the devices, especially for the edge devices with limited computational resources. The convolutional layer, which is the critical component in the CNNs, is our target not only to slim but also to improve the accuracy of the network.
In the convolutional layer, convolution kernels are spatial-agnostic and channel-specific. Because of the spatial-agnostic characteristic, a convolutional layer cannot adapt to different visual patterns with respect to different spatial locations. Therefore, in the convolutional layer, many redundant convolutional filters and parameters are required for feature extraction in location-related problems.
In [27], Harmonic convolutional networks based on DCT were proposed. Only forward DCT without inverse DCT computation is employed to obtain Harmonic blocks for feature extraction. In contrast to spatial convolution with learned kernels, this study proposes feature learning by weighted combinations of responses of predefined filters. The latter extracts harmonics from lower-level features in a region, and it applies DCT on the outputs from the previous layer, which are already encoded by DCT.
Because the Conv2D layer is spatial-agnostic, the convolutional kernels cannot utilize the information from the location of the pixels. Conversely, the transform-based perceptron layer is location-specific, which means each weight parameter gains the location information.
The Conv2D layer has spatial-agnostic and channel-specific characteristics. Because of the spatial-agnostic characteristics of a Conv2D layer, the network cannot adapt to different visual patterns corresponding to different spatial locations. On the contrary, the transform-based perceptron layer is location-specific and channel-specific. The 2D transform is location-specific but channel-agnostic, as it is computed using the entire block as a weighted summation on the spatial feature map. The scaling layer is also location-specific but channel-agnostic, as different scaling parameters (filters) are applied on different entries, and weights are shared in different channels. That is why we also use PyTorch’s $1\times 1$ Conv2D to make the transform-based perceptron layer channel-specific.
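As a rough illustration of this structure, the following PyTorch sketch composes a 2D DCT, a location-specific scaling shared across channels, an inverse DCT, and a $1\times 1$ Conv2D for channel mixing; the class name, sizes, and composition order are assumptions for illustration rather than the exact layer proposed here.

```python
import torch
import torch.nn as nn

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = torch.arange(n).reshape(-1, 1).float()
    i = torch.arange(n).reshape(1, -1).float()
    C = torch.sqrt(torch.tensor(2.0 / n)) * torch.cos(torch.pi * (i + 0.5) * k / n)
    C[0] /= torch.sqrt(torch.tensor(2.0))
    return C

class TransformPerceptron(nn.Module):
    """2D DCT -> learnable elementwise scaling -> inverse DCT -> 1x1 conv."""
    def __init__(self, channels, size):
        super().__init__()
        self.register_buffer("C", dct_matrix(size))
        # Scaling is location-specific but shared across channels.
        self.scale = nn.Parameter(torch.ones(size, size))
        self.mix = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                 # x: (B, C, H, W) with H == W == size
        X = self.C @ x @ self.C.t()       # 2D DCT of each feature map
        X = X * self.scale                # elementwise scaling in the transform domain
        x = self.C.t() @ X @ self.C       # inverse DCT
        return self.mix(x)                # 1x1 conv makes the layer channel-specific

# Example usage on a 32x32 feature map with 16 channels.
layer = TransformPerceptron(channels=16, size=32)
out = layer(torch.randn(2, 16, 32, 32))
```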
A
To overcome the high $G_{dscale}$ differences between each pair of domains, we hypothesized that training the MDE using self-supervision jointly on both the source and the target datasets could potentially result in ranking the depths of both domains on a common scale, thus achieving a de facto inter-domain depth ranking.
Over the years, various solutions were suggested to overcome the lack of target GT depth measurements for training MDEs to predict absolute depth from target images. Here we cover the main approaches, that are also presented by category in Figure 2. The first approach is implemented as zero-shot [44] (see Figure 2a), where a model is trained on source datasets, and used to infer depth on target images, in the hope of generalizing well on the new domain. A recent zero-shot model [28] successfully overcame the geometrical domain gap between the source and the target domain by training a transformer-based architecture on a variety of source datasets (containing more than 700,000 training images with GT) that were further augmented to support various focal lengths. In addition, the camera parameters were embedded to enable zero-shot capabilities on various target datasets. In our work, we show an alternative solution to close the geometrical domain gap that uses only few annotated source samples (validation/test splits, less than 3,000 images) with a significantly lighter model (50× fewer parameters). In addition, since our solution also uses target domain images, it could be re-adjusted to the new domain.
First, we adjusted the FOV of the source domain data (train and test splits) to match the FOV of the target domain, as described in Section 3.2 and Figure 3A. Training images from both source and target domains were randomly split into batches of four and used to train networks $\Phi$ and $\Psi$ in a self-supervised manner (see Section 3.1 and Figure 3B).
Table 3: $G_{dscale}$ values estimated on test splits of various datasets (first column). Second column indicates which dataset was used as source (S) or target (T). Third column: two MDEs were trained separately on the source and the target training datasets; Fourth column: two MDEs were trained separately on the source and the target training datasets, the source images were adjusted to the target FOV (S $\xrightarrow{\text{FOV}}$ T); Fifth column: a single MDE was trained on a mixture of training data from the target and the source domains after the source images were adjusted to the target FOV.
To enable training on two domains, the images from the source domain were adjusted to the FOV of the target domain, as described in Section 3.2.
D
Figure 4 reveals that MICVAE significantly outperforms the CrudeControl method, achieving lower RMSEs across a realistic range of missingness (between 4 and 70 PAFs). Our model (MICVAE) produces prosody closer to the ground-truth than the prosody resulting from manually setting the output PAFs to default values, meaning MICVAE uses the learned structure of prosody data to fill in the gaps between the control points. The fact that Crude Control performs better than MICVAE is to be expected when the missingness rate approaches 0, because we are manually switching all PAFs to the values extracted from the ground-truth utterance.
Efficiency (Objective Evaluation). We assess efficiency by how many control points are required for the model to produce outputs that align with the user’s intention. Alignment is measured by the Root Mean Squared Error (RMSE) between the generated and ground-truth PAFs. We feed the model with additional control points from our test set incrementally. In this “iterative refinement” process the PAF with the highest RMSE in the previous generation is provided in the next step. A lower RMSE achieved with fewer input PAFs directly translates to efficiency; it implies less user input is required to generate satisfactory prosodic features.
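A minimal sketch of this iterative-refinement protocol follows, assuming a placeholder `generate` callable that maps the current control points to generated PAFs; the function names and array shapes are illustrative, not the exact evaluation code.

```python
import numpy as np

def rmse(a, b):
    """Per-feature RMSE over frames."""
    return np.sqrt(np.mean((a - b) ** 2, axis=-1))

def iterative_refinement(generate, gt_pafs, max_points=70):
    """Feed control points one at a time, always picking the currently worst PAF.

    generate : callable mapping a dict {paf_index: gt_values} to generated PAFs.
    gt_pafs  : array of shape (n_pafs, n_frames) with ground-truth features.
    """
    control = {}
    errors = []
    for _ in range(max_points + 1):
        pred = generate(control)              # model output given current control points
        per_paf = rmse(pred, gt_pafs)         # RMSE of each prosodic feature
        errors.append(per_paf.mean())
        worst = int(np.argmax(per_paf))       # PAF with the highest error so far
        control[worst] = gt_pafs[worst]       # provide it as the next control point
    return errors
```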
Figure 6 reveals that MICVAE with 4 input PAFs is significantly perceived as closer to the reference audio compared to the version without any input PAFs. This not only confirms the model’s faithfulness to user intentions but also underscores its efficiency, as it achieves these results with a minimum number of input PAFs.
Fig. 6: 4 control points are enough to bring our model’s output significantly closer to the reference utterance. These are ratings in an A/B/R test where the question is “Which of these is closer to the reference ground-truth control audio?”.
Efficiency (Subjective Evaluation). We also conduct an A/B/R listening test involving 20 native Latin American Spanish speakers. Participants choose which of two audio outputs (one with 4 input PAFs and one with 0) sounds closer to a reference ground-truth audio. We chose 4 and 0 to demonstrate the impact that just 4 control points can have on the prosody.
D
Along similar lines, many works have captured the effect of the angular coordinate in the calculation of interference power and correspondingly in performance analysis of cellular networks by adopting realistic antenna patterns. In [25], the authors considered the effect of beam misalignment utilizing a 3GPP-based antenna pattern in a stochastic geometry framework. In [26], the authors investigated the impact of directional antenna arrays on mmWave networks. Among other insights, the role of realistic antenna patterns in the interference power is demonstrated. In [27], a multi-cosine antenna pattern is proposed to approximate the actual antenna pattern of a uniform linear array (ULA) and the impact in the interference power is highlighted. In [28], the authors adopt an actual three dimensional antenna model and a uniform planar array, which is mounted on UAVs, to examine the impact of both azimuth and elevation angles on the interference power.
In realistic mmWave networks, beam misalignment is inevitable, and the direction of the UE’s maximum gain may not be necessarily fully aligned with the corresponding one of the serving BS [21]. More specifically, beam misalignment can occur between the transmitting and receiving beams after channel estimation during the 5G NR beam management-based UE’s association policy [22] due to the following reasons: 1) use of codebook-based beamforming at the UEs with limited number of beams, 2) imperfect channel estimation, which results in estimation errors in the angle-of-arrival (AoA) or angle-of-departure (AoD), 3) imperfections in the antenna arrays, which includes array perturbation and mutual coupling, 4) mobility of the transceivers, and 5) environmental vibrations such as from wind or moving vehicles. Indeed, by considering codebook-based beamforming at the UE, the receiver is agnostic to the conditions that provide maximum power and the UE will perform scanning within a distance-limited finite area to select the serving BS among a set of candidate serving BSs. Therefore, it becomes clear that the selection of the serving BS will strongly depend on its location and thus, one should account for both polar coordinates of the candidate serving BSs in the determination of the maximum receiver power. Therefore, misalignment needs to be carefully accounted for in the mathematical analyses if one is to capture realistic 5G NR beam management-based association procedures. While path-loss is just dependent on the frequency and Euclidean distance, misalignment error, which is now a function of the angular distance between the candidate serving BSs and the UE, should be explicitly considered in the measurement of the received power in the UE’s association policy. The authors in [23] and [10] model the misalignment error as a random variable following the truncated Gaussian distribution, whereas the authors in [24], derived an empirical PDF for the misaligned gain based on simulations. However, the consideration of codebook-based beamforming at the UE necessitates a more nuanced analysis. To the best of the authors’ knowledge, this work proposes for the first time a stochastic geometry framework to study the performance in a mmWave cellular network by adopting 5G NR beam management-based procedures and jointly considering the impact of both the Euclidean and angular distances of the BSs in the UE’s association policy. The use of the angular distance is critical for the accurate estimation of the receiving antenna gain using 3GPP antenna patterns and allows to depart from the ideal baseline scenario.
The dominant interferer approach has been widely used in the literature due to its usefulness when the exact analysis is too complicated or leads to unwieldy results. For instance, in [29]-[32], the authors capture the effect of the dominant interferer while approximating the residual interference with a mean value. In this section, in order to understand the effect of the different potential definitions of the dominant interferer on the performance analysis, the coverage performance is investigated under the assumption of neglecting all but a single dominant interferer for all policies. Note that in order to define the interferer as dominant, the latter is restricted to $\mathbf{b}(\mathbf{o},R_{L})$. Accordingly, a performance comparison between the dominant interferer approaches and the exact performance of Policy 1 is conducted. To this end, the noise power is assumed to be negligible as compared to the aggregate interference experienced at the receiver, i.e., interference-limited scenarios are considered (note that since interference from other BSs is ignored, the resulting SIR stochastically dominates the exact SIR of the respective policy, which implies that the dominant interferer approach yields a bound on the exact coverage probability of Policy 1, Policy 2 and Policy 3), and the coverage probability analysis, i.e., $\mathcal{P}_{c}(\gamma)\triangleq 1-F_{\rm SIR}(\gamma)$, is conducted in terms of the achieved SIR.
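For reference, the coverage probability defined above can be estimated empirically from SIR realizations; the following sketch uses synthetic SIR samples purely for illustration, whereas in the paper the SIR realizations would come from the stochastic-geometry network model.

```python
import numpy as np

def coverage_probability(sir_samples_db, gamma_db):
    """Empirical P_c(gamma) = 1 - F_SIR(gamma) from a set of SIR realizations."""
    return np.mean(sir_samples_db > gamma_db)

# Illustrative use with synthetic SIR samples (dB); real samples would come
# from simulating the serving and dominant-interferer links.
rng = np.random.default_rng(0)
sir_db = rng.normal(loc=5.0, scale=8.0, size=100_000)
thresholds = np.arange(-10, 21, 2)
pc = [coverage_probability(sir_db, g) for g in thresholds]
print(list(zip(thresholds, pc)))
```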
On another front, the idea of the dominant interferer has been introduced in the literature to facilitate the calculation of the SINR and provide a realistic and mathematically tractable approximation of the accurate aggregate interference [29]-[32]. The notion of angular distances and their implication on the identification of the dominant interferer has recently been highlighted in [33] and [34]. Accordingly, a dominant interfering BS may not necessarily be the closest one to the receiver. Indeed, a far interferer may cause more severe interference than a closer one, due to the fact that the AoA at the receiver may fall within the 3dB beamwidth of the antenna beam.
Fig. 9 shows the coverage probability versus $\gamma_{th}$ under Policy 1, Policy 2 and Policy 3 for the dominant interferer. Moreover, the coverage performance under Policy 1 and aggregate interference is depicted, when the receiver’s antenna is equipped with different numbers of sectors. Quite interestingly, it is observed that the coverage performance under Policy 2 with a single dominant interferer approximates the performance under Policy 1 with aggregate interference, especially when the number of sectors is small. In this case, the angular distance-based criterion results in realistic network performance. On the other hand, the performance under Policy 3 in the presence of a single dominant interferer clearly overestimates the corresponding one under Policy 1, which leads to the following system-level outcome: in a LOS ball of a mmWave network under beam misalignment error at the receiver, attaching to the closest BS in angular distance and considering the dominant interferer as the closest BS in angular distance w.r.t. the line of the communication link results in a more accurate approximation of the coverage performance compared to the policy of attaching to the closest LOS BS and considering the dominant interferer as the second nearest BS. Notably, the performance of Policy 2 under the dominant interferer approach yields a better approximation of the network’s coverage than the corresponding performance under Policy 1 under the dominant interferer approach.
C
Fig. 3a-d shows the time-averaged RMSE and RCRLB for state estimation for forward and inverse EKF, UKF, CKF, and QKF, including the inverse filters with mismatched forward filters. From Fig. 3a, we observe that the forward EKF has the lowest error while forward QKF performs worse than all other forward filters. Although with correct forward filter assumption, IUKF-U has higher error than I-EKF (Fig. 3b), IUKF-E outperforms I-EKF even with incorrect forward filter assumption. Interestingly, forward UKF and CKF have similar accuracy, but IUKF-C has significantly lower errors than IUKF-U.
Fig. 2a shows the time-averaged RMSE in velocity estimation and its RCRLB (also, time-averaged) for a system that employs forward and inverse CKF, hereafter labeled ICKF-C system. We define the ICKF-U system as the one wherein the defender employs I-CKF, assuming a forward CKF when the attacker’s true forward filter is UKF. The other notations in Fig. 2 and also, in further experiments, are similarly defined. The RCRLB is computed as $\sqrt{[\mathbf{J}^{-1}]_{2,2}+[\mathbf{J}^{-1}]_{4,4}}$, where $\mathbf{J}$ is the corresponding information matrix. We observe that forward CKF and UKF have similar estimation accuracy. Hence, both ICKF-C and ICKF-U yield similar estimation errors regardless of the actual forward filter. Although the forward and inverse filters have similar RCRLBs, the difference between the estimation error and RCRLB for I-CKF is less than that for the forward CKF. Hence, I-CKF outperforms forward CKF in terms of achieving the lower bound. Note that the forward and inverse filters are compared only to highlight their relative accuracy. For the considered system, I-UKF’s error and RCRLB are similar to that of I-CKF.
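A small sketch of the stated RCRLB computation from an information matrix $\mathbf{J}$ follows; the example matrix is arbitrary, and the code uses 0-based indices for the (2,2) and (4,4) entries referred to in the text.

```python
import numpy as np

def rcrlb_velocity(J):
    """sqrt([J^-1]_{2,2} + [J^-1]_{4,4}) with 1-based indexing as in the text."""
    J_inv = np.linalg.inv(J)
    return np.sqrt(J_inv[1, 1] + J_inv[3, 3])   # 0-based indices 1 and 3

# Illustrative 4x4 information matrix (symmetric positive definite).
J = np.diag([10.0, 4.0, 10.0, 4.0])
print(rcrlb_velocity(J))
```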
Fig. 3a-d shows the time-averaged RMSE and RCRLB for state estimation for forward and inverse EKF, UKF, CKF, and QKF, including the inverse filters with mismatched forward filters. From Fig. 3a, we observe that the forward EKF has the lowest error while forward QKF performs worse than all other forward filters. Although with correct forward filter assumption, IUKF-U has higher error than I-EKF (Fig. 3b), IUKF-E outperforms I-EKF even with incorrect forward filter assumption. Interestingly, forward UKF and CKF have similar accuracy, but IUKF-C has significantly lower errors than IUKF-U.
Fig. 2b-d shows the time-averaged RMSE and RCRLB for state estimation for forward and inverse UKF, QKF, and CQKF, including the mismatched inverse filter cases. The RCRLB for state estimation is $\sqrt{\textrm{Tr}(\mathbf{J}^{-1})}$, where $\mathbf{J}$ is the corresponding information matrix. For the Lorenz system, forward QKF and CQKF estimate the state more accurately than forward UKF. Regardless of the forward filter assumption, I-UKF has a similar performance as the corresponding forward filter. For instance, IUKF-U and forward UKF have similar estimation errors. On the other hand, from Fig. 2c, we observe that I-QKF outperforms the forward filters in all cases, i.e., IQKF-Q, IQKF-U and IQKF-CQ have lower errors than forward QKF, UKF and CQKF, respectively. Contrarily, in Fig. 2d, ICQKFs closely follow the corresponding forward filters’ errors. Interestingly, for the considered system, I-UKF’s and I-CQKF’s RCRLB is the same as that for the forward filters, which is slightly less than that for I-QKF. In spite of this, I-QKF has higher estimation accuracy than I-UKF, I-CQKF, and the forward filters, but with higher computational efforts.
Unlike I-UKF, in Fig. 3c, both ICKF-C, and I-EKF have similar performance with correct forward filter assumption. Contrarily, IQKF-Q in Fig. 3d performs slightly better than I-EKF, but with higher computational efforts. However, both ICKF-E and IQKF-E, respectively, in Fig. 3c and 3d, again outperform I-EKF. Since forward EKF estimates its state most accurately, we observe that IUKF-E, ICKF-E, and IQKF-E have the lowest estimation error even without true forward filter information.
D
1) M-DBD with structured unknown continuous-valued parameters. We exploit the sparsity of both radar and communications channels to formulate the recovery of unknown continuous-valued channel/signal parameters as a 3-D DBD problem. Following the approaches in [9, 27], we represent the unknown transmit radar signal (a periodic waveform) and communications messages in a low-dimensional subspace spanned by the columns of a known representation basis. This representation allows including the special structure of radar and communications signals in our M-DBD formulation.
4) Practical issues. We consider practical issues to generalize our approach. In the presence of noise, our formulation adds a regularization term to the dual problem. We further show that our method is robust to both gain and phase errors in the steering vector [30] and derive the optimality of the corresponding regularization parameters. In the non-blind scenarios, these errors have been tackled through techniques such as eigendecomposition of the measurement covariance matrix [31], Hadamard product-based estimation [32], and regularized ANM [33].
To formulate the SoMAN SDP, we resort to the following dual problem of (23) obtained from the Lagrangian of the primal objective:
The rest of the paper is organized as follows. In the next section, we describe the signal model for the multi-antenna overlaid JRC receiver. We devise the exact SoMAN SDP for 3-D DBD recovery in Section 3 along with the procedure to recover radar and communications waveform and theoretical recovery guarantees. The practical scenarios of noise and antenna errors are considered in Section 4, wherein we also provide optimal regularization parameters. The proof of our main result is detailed in Section  5. We validate our model and methods through extensive numerical experiments in Section 6 and conclude in Section 7.
2) 3-D SoMAN-based recovery. We formulate our problem as the minimization of the sum of two tri-variate atomic norms. However, the primal SoMAN problem does not directly yield a semidefinite program (SDP). We, therefore, turn to the dual problem and derive the SDP using the theories of positive hyperoctant trigonometric polynomials (PhTP) [28]. In the non-blind case, this approach has been previously employed for high-dimensional super-resolution (SR) [17] and bivariate radar parameter estimation [29]. We demonstrate our approach through extensive numerical experiments.
D
$\left\|A_{I^{C}}^{T}\left(\frac{b}{\|b\|}+\bar{z}\right)\right\|_{\infty}=\left\|\begin{pmatrix}\lambda/2\\ \lambda\\ \lambda\end{pmatrix}+\begin{pmatrix}\lambda/3\\ -\lambda/6\\ -\lambda/6\end{pmatrix}\right\|_{\infty}<\lambda.$
$\ker A_{I}=\{0\}\ \text{and}\ b\notin\mathrm{rge}\,A_{I}.$
$x^{\sharp}_{j}:=\begin{cases}m+W_{j}\sqrt{m},&j\in[s],\\ 0,&j\in[n]\setminus[s],\end{cases}$ and $b:=Ax^{\sharp}+\gamma w$.
$A:=\begin{pmatrix}1&0&2\\ 0&2&-2\end{pmatrix}$ and $b:=\begin{pmatrix}1\\ 1\end{pmatrix}$.
$A=\begin{pmatrix}1&0&0\\ 0&1&1\end{pmatrix}$, and $b=\binom{1}{2}$.
C
Due to the spatial overlaps between adjacent cubes, the representation fields can be well-covered by employing the proposed cube-based sampling in optimization.
As a result, the representation fields can be densely modeled with the same sampling time as NeRF [23].
Volume rendering. The pixel color $\mathbf{C}(\mathbf{r})$ can be modeled as the integral along the corresponding ray $\mathbf{r}$ based on the Beer-Lambert law as:
Due to the spatial overlaps between adjacent cubes, the representation fields can be well-covered by employing the proposed cube-based sampling in optimization.
After optimization, CuNeRF can predict the pixels at any spatial coordinates within the representation fields.
A
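The volume-rendering line in the row above states that a pixel color is the ray integral of density and color under the Beer-Lambert law. Below is a minimal numerical sketch of that quadrature in the standard NeRF-style discretization; it is an illustrative assumption, not the CuNeRF implementation, and the function name and array shapes are chosen here for clarity.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Discretize C(r) = integral of T(t) * sigma(t) * c(t) dt along one ray.

    sigmas: (S,) volume densities at the S samples on the ray
    colors: (S, 3) RGB values predicted at those samples
    deltas: (S,) spacings between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                             # per-sample opacity (Beer-Lambert)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]      # accumulated transmittance T_i
    weights = trans * alphas                                            # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                      # composited pixel color

# toy usage: three samples on a ray
print(render_ray(np.array([0.5, 1.0, 2.0]),
                 np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float),
                 np.array([0.1, 0.1, 0.1])))
```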
Another research direction aimed at ensuring the feasibility of the samples is to include barrier functions in the objective to penalize the proximity to the boundary of the feasible set [13, 21]. In this category, extremum seeking methods estimate the gradient of the new objective function by adding sinusoidal perturbations to the decision variables [22]. However, due to the perturbations, these methods have to adopt a sufficiently large penalty coefficient to ensure all the samples fall in the feasible region. This strategy sacrifices optimality since deriving a near-optimal solution requires a small penalty coefficient. In contrast, the LB-SGD algorithm proposed in [23] uses log-barrier functions and ensures the feasibility of the samples despite a small penalty coefficient. After calculating a descent direction for the cost function with log-barrier penalties, this method exploits the Lipschitz and smoothness constants of the constraint functions to build local safe sets for selecting the step size of the descent. Although LB-SGD comes with a polynomial worst-case complexity in problem dimension, it might converge slowly, even for convex problems. The reason is that as the iterates approach the boundary of the feasible set, the log-barrier function and its derivative become very large, leading to very conservative local feasible sets and slow progress of the iterates.
To address an unmodeled constrained optimization, we develop a safe zeroth-order optimization method in this paper. Zeroth-order methods rely only on sampling (i.e., evaluating the unknown objective and constraint functions at a set of chosen points) [6]. Safety, herein referring to the feasibility of the samples, is essential in several real-world problems, e.g., medical applications [7] and racing car control [8]. Below, we review the pertinent literature on zeroth-order optimization, highlighting, specifically, safe zeroth-order methods.
Safe zeroth-order optimization has been an increasingly important topic in the learning-based control community. One application is constrained optimal controller tuning with unknown system dynamics. In reinforcement learning, Constrained Policy Optimization [24] and Learning Control Barrier Functions [25] (model-free) are used to find the optimal safe controller, but feasibility during training cannot be ensured. Bayesian Optimization can also be applied to optimal control in a zeroth-order manner. For example, [5] proposes Violation-Aware Bayesian Optimization to optimize three set points of a vapor compression system, [26] utilizes SafeOpt to tune a linear control law with two parameters for quadrotors, and [27] implements the Goal Oriented Safe Exploration algorithm in [18] to optimize a PID controller with three parameters for a rotational axis drive. Although these variants of Bayesian Optimization offer guarantees of sample feasibility, they scale poorly to high-dimensional systems due to the non-convexity of the subproblems and the need for numerous samples.
Applications ranging from power network operations [1], machine learning [2] and trajectory optimization [3] to optimal control [4, 5] require solving complex optimization problems where feasibility (i.e., the fulfillment of the hard constraints) is essential. However, in practice, we do not always have access to the expressions of the objective and constraint functions.
In this section, we present three numerical experiments to test the performance of Algorithm 1. The first is a two-dimensional problem where we compare SZO-QQ with other existing zeroth-order methods and discuss the impact of parameters. In the remaining two examples, we apply our method to solve optimal control and optimal power flow problems, which have more dimensions and constraints. All the numerical experiments have been executed on a PC with an Intel Core i9 processor. For solving (SP1) and (SP2) in Algorithm 1 (the latter can be reformulated as a QCQP), we use
B
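To make the log-barrier idea in the row above concrete, the sketch below estimates a descent direction for the barrier-augmented objective f(x) - eta * sum_i log(-g_i(x)) using central finite differences on zeroth-order evaluations. This is only a schematic step under assumed smoothness and strict feasibility, not LB-SGD itself (which additionally uses Lipschitz and smoothness constants to build safe step sizes); all names and constants here are illustrative.

```python
import numpy as np

def barrier_value(f, gs, x, eta):
    # f: objective, gs: list of constraints g_i(x) <= 0, eta: barrier weight
    return f(x) - eta * sum(np.log(-g(x)) for g in gs)

def zeroth_order_barrier_step(f, gs, x, eta=0.1, h=1e-4, step=1e-2):
    """One finite-difference descent step on the log-barrier objective."""
    n = x.size
    grad = np.zeros(n)
    for j in range(n):
        e = np.zeros(n); e[j] = h
        grad[j] = (barrier_value(f, gs, x + e, eta)
                   - barrier_value(f, gs, x - e, eta)) / (2 * h)
    x_new = x - step * grad
    # crude safeguard: only accept the step if it stays strictly feasible
    return x_new if all(g(x_new) < 0 for g in gs) else x

# toy usage: minimize ||x||^2 subject to x_1 + x_2 <= 1 (constraint stays inactive)
x = np.array([0.4, 0.3])
for _ in range(50):
    x = zeroth_order_barrier_step(lambda z: z @ z, [lambda z: z[0] + z[1] - 1.0], x)
print(x)
```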
Finally, we compare the running times required for the proposed Riemannian methods and the comparable iterative methods to converge in Table III. All the simulations are conducted by Matlab R2019b on a desktop with Intel(R) Core(TM) i9-10900K running at 3.70 GHz. We can see that the RCG method runs fastest in all the cases. In addition, RTR runs faster than the iterative methods for comparison in all the cases, especially in the PUPC case.
In this paper, we have investigated the linear precoder design methods with matrix manifold in massive MIMO DL transmission. We focus on the WSR-maximization precoder design and demonstrate that the precoders under TPC, PUPC and PAPC are on different Riemannian submanifolds. Then, the constrained problems are transformed into unconstrained ones on Riemannian submanifolds. Furthermore, RSD, RCG and RTR methods are proposed for optimizing on Riemannian submanifolds. There is no inverse of large dimensional matrix during the iterations in the proposed methods. Besides, the complexities of implementing these Riemannian design methods on different Riemannian submanifolds are investigated. Simulation results show the numerical superiority and computational efficiency of the RCG method.
With the Riemannian ingredients derived in Section III, we propose three precoder design methods using the RSD, RCG and RTR in this section. There is no inverse of the large dimensional matrix in the proposed Riemannian methods during the iterations, thereby enabling significant savings in computational resources. For the same power constraint, the computational complexities of the RSD or RCG method are nearly the same and are lower than those of the RTR and comparable methods.
In this paper, we focus on WSR-maximization precoder design for massive MIMO DL transmission and propose a matrix manifold framework applicable to TPC, PUPC and PAPC. We reveal the geometric properties of the precoders under different power constraints and prove that the precoder sets satisfying TPC, PUPC and PAPC form three different Riemannian submanifolds, respectively, transforming the constrained problems in Euclidean space into unconstrained ones on Riemannian submanifolds. To facilitate a better understanding, we analyze the precoder designs under TPC, PUPC and PAPC in detail. All the ingredients required during the optimizations on Riemannian submanifolds are derived for the three power constraints. Further, we present three Riemannian design methods using Riemannian steepest descent (RSD), Riemannian conjugate gradient (RCG) and Riemannian trust region (RTR), respectively. Without the need to invert the large dimensional matrix during the iterations, Riemannian methods can efficiently save computational costs, which is beneficial in practice. Complexity analysis shows that the method using RCG is computationally efficient. The numerical results confirm the advantages of the RCG method in convergence speed and WSR performance.
The remainder of the paper is organized as follows. In Section II, we introduce the preliminaries in the matrix manifold optimization. In Section III, we first formulate the WSR-maximization precoder design problem in Euclidean space. Then, we transform the constrained problems in Euclidean space under TPC, PUPC and PAPC to the unconstrained ones on Riemannian submanifolds and derive Riemannian ingredients in the matrix manifold framework. To solve the unconstrained problems on the Riemannian submanifolds, Section IV provides three Riemannian design methods and their complexity analyses. Section V presents numerical results and discusses the performance of the proposed precoder designs. The conclusion is drawn in Section VI.
A
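The Riemannian precoder row above relies on the project-then-retract pattern: take the Euclidean gradient, project it onto the tangent space of the power-constraint submanifold, move along it, and retract back. A minimal sketch of one such ascent step on the simplest case, a total-power (Frobenius-sphere) constraint, is given below; it is an assumed toy illustration, not the paper's exact submanifolds, retraction, or WSR objective.

```python
import numpy as np

def riemannian_ascent_step_total_power(grad_e, X, P_tot, step=0.1):
    """One Riemannian steepest-ascent step on {X : ||X||_F^2 = P_tot}.

    grad_e : Euclidean gradient of the utility at X (same shape as X)
    The Riemannian gradient is the tangential component of grad_e, and
    the retraction simply rescales the update back onto the power sphere.
    """
    normal = X / np.linalg.norm(X)                       # unit normal of the sphere at X
    rgrad = grad_e - np.vdot(normal, grad_e).real * normal  # tangent-space projection
    Y = X + step * rgrad                                 # move along the ascent direction
    return np.sqrt(P_tot) * Y / np.linalg.norm(Y)        # retract onto the constraint set

# toy usage: ascend Re{tr(A^H X)} under a total power constraint
A = np.ones((4, 2), dtype=complex)
X = np.full((4, 2), 0.5 + 0j)
for _ in range(100):
    X = riemannian_ascent_step_total_power(A, X, P_tot=4.0)
print(np.linalg.norm(X) ** 2)  # stays at P_tot = 4
```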
Deep domain adaptation (DA) methods are being increasingly studied in medical image segmentation to reduce the domain shift effects [1, 10, 11]. In the context of cross-modal segmentation, we focus in particular on unsupervised domain adaptation (UDA) methods that do not rely on any prior knowledge of the labels of the target domain [12, 13, 14, 15, 16]. Typically, UDA methods for cross-modal segmentation involve two stages: unsupervised image-to-image (I2I) translation to learn intensity mappings between source and target domains, followed by supervised segmentation leveraging labels from the source domain [17, 18, 19, 20, 21, 22]. These two stages can also be combined into end-to-end models to benefit from label knowledge during I2I translation, at the cost of increased architectural complexity [17, 23, 24]. Although newer generative paradigms based on e.g. diffusion models are now emerging [25], most of existing I2I translation methods are based on generative adversarial networks (GANs) [1, 4, 26, 17] that promote realistic outputs through a competition between a generator and a discriminator [27, 28]. The most popular models for unsupervised I2I translation are CycleGAN and its variants [29, 30, 31]. However, GAN-based methods tend to learn global image-level mappings, potentially disregarding smaller regions of interest (ROIs) like tumors that may be underrepresented in the training set [32, 33]. Maintaining a balance in the distributions of features of interest in the training and target domains becomes crucial for accurately translating such structures, which is of paramount importance to train a downstream segmentation model on the target modality. Tuning this proportion without prior knowledge of the test set’s composition remains an open problem. Lastly, an often overlooked aspect is the high variability and difficulty in reproducing the outputs of CycleGAN models [34, 35]. It is common to retain the best performing model (as measured subjectively) from several trainings, which is not satisfactory due to the aleatoric nature of such practice.
Due to their generative capability, GANs can also be trained on medical images to generate samples tailored to a specific modality [48]. Related methodologies include tumor inpainting techniques from healthy subjects [49, 50, 51] or medical image style transfer [52]. Such GAN-based data augmentations have demonstrated a potential to reduce the number of required annotations compared to traditional data augmentation methods. However, these approaches typically rely on large training datasets to enable the generator and the discriminator to capture the diversity of the data distribution. Consequently, they lack robustness in scenarios with limited data availability.
The main objective of the present work was to study the potential interest of a new form of data augmentation, generative blending augmentation, to improve domain generalization for image segmentation. The underlying rationale behind GBA is to expose the network to a wide range of synthetic, realistically blended tumor contrasts to help tackle domain shifts during deployment. This approach is especially useful in scenarios where the distribution of tumor appearances during training may not cover the various deployment conditions due to factors such as sample selection bias or low data regimes.
Data augmentation allows to artificially increase the diversity of training examples without additional data [36].
We introduce a new data augmentation technique to diversify the tumor appearances to which the network is exposed during training. To this end, we analyze the distribution of tumor appearances in the target domain to cover the distribution shift between centers by leveraging pseudo labels obtained through iterative self-training. In doing so, we make the segmentation model more robust to center-specific features and potential errors from the I2I translation stage.
C
$1.75\times 10^{-8}\ \Omega\,$m
$1.75\times 10^{-8}\ \Omega\,$m
Figure 8: Relative location accuracy for line section b) with underground cable and PD event located at a distance $x=3/10\,a$ after Step VI.
$2.8\times 10^{-7}\ \Omega\,$m
$1.68\times 10^{-8}\ \Omega\,$m
C
In this section, we apply our approach to two case studies and evaluate the results. We first consider a case study with an unbounded specification, followed by one with a bounded specification. All simulations have been performed on a computer with a 2.3 GHz Quad-Core Intel Core i5 processor and 16 GB 2133 MHz memory. For each case study, we compute the average computation time after performing 10 simulations and mention the observed standard deviation. The computation time includes all operations, and therefore also includes acquiring data from the data-generating systems. Besides that, we compute the memory usage by considering all data stored in the MATLAB workspace.
In our considered case, both cars start close to each other with the same constant velocity. The goal of the controller is to make sure that the follower car achieves a safe distance to the leader car within a specific time frame and maintains this safe distance forever.
In optimization problem (3) we have two constraints. The first constraint on the system dynamics implies that the trajectory belongs to the behavior of the model. This can be equivalently written using the characterization of the system obtained through data, i.e., as in (6).
Here, the output $y_{t}$ of the data-generating system equals the distance between the cars. We consider a bounded input $u_{t}\in\mathbb{U}=[-2,2]$ that influences the velocity of the follower car and, therefore, also the distance between the cars. Note that the analytic form of the data-generating system is only used to generate a data sequence $\mathbf{w}^{\text{data}}$ of length 31. We evaluate the results for two different cost functions: $J_{1}(\mathbf{u}_{0},\mathbf{y}_{0})=\|\mathbf{y}_{[0,L]}\|$ minimizes the distance between the cars, while $J_{2}(\mathbf{u}_{0},\mathbf{y}_{0})=\|\mathbf{u}_{[0,L]}\|$ minimizes the actuation of the car.
Inspired by Haesaert and Soudjani (2020), we consider a car platooning example with two cars, a leader and a follower. To design a data-driven controller that controls the distance between the cars, we get data from the data-generating system defined in the Appendix of .
D
Similarly, M3FM-SM-ST achieves an AUC of 65.15% (95% CI, 59.39%-70.92%) for consolidation detection, and the M3FM-MM-ST model achieves an AUC of 68.95% (95% CI, 63.26%-74.64%), a 3.80% improvement.
In particular, M3FM-SM-ST achieves an AUC of 81.63% (95% CI, 75.85%-87.41%) for CVD mortality prediction while the M3FM-MM-ST model achieves an AUC of 87.09% (95% CI, 82.00%-92.19%), which represents a 5.46% improvement.
While M3FM-SM-ST achieves an AUC of 89.24% (95% CI, 87.45%-91.04%) for CVD diagnosis, the M3FM-MM-ST model achieves an AUC of 92.38% (95% CI, 90.84%-93.92%), i.e., a 3.14% improvement.
Also, M3FM-SM-ST achieves an AUC of 76.76% (95% CI, 75.73%-77.79%) for reticular/reticulonodular opacities/honeycombing/fibrosis/scar detection, and the M3FM-MM-ST model achieves an AUC of 79.29% (95% CI, 78.30%-80.27%), a 2.53% improvement.
Similarly, M3FM-SM-ST achieves an AUC of 65.15% (95% CI, 59.39%-70.92%) for consolidation detection, and the M3FM-MM-ST model achieves an AUC of 68.95% (95% CI, 63.26%-74.64%), a 3.80% improvement.
C
This strategy usually leads to a mismatch between the two tasks, as SE is optimized in terms of speech quality metrics like signal-to-noise ratio (SNR), while ASR is optimized in terms of speech intelligibility metrics like cross-entropy and word error rate (WER).
In view of that, previous works propose to cascade SE and ASR as a joint network [43, 44], where SE serves as a denoising front-end to benefit downstream ASR.
In particular, some previous works [54, 55] observe that the enhanced speech from SE might not always yield good performance for downstream ASR, as some important ASR-related latent information in the original noisy speech is suppressed by SE processing together with the noise, which is often undetected at the speech enhancement stage but could be detrimental to the downstream ASR task.
To alleviate this issue, recent works [56, 57, 53] propose to fuse the distorted enhanced speech with the original noisy speech to recover some over-suppressed information, which has achieved considerable improvements in ASR performance, though it still cannot clear out the distortions.
To this end, later works [49, 50, 51, 52] propose multi-task joint training to optimize SE and ASR modules simultaneously, which results in some improvements.
D
This framework facilitates our analysis in deriving a tight closed-form capacity upper bound. It reveals that the capacity grows logarithmically with the product of transmit element area, receive element area, and the combined effects of $1/d_{mn}^{2}$, $1/d_{mn}^{4}$, and $1/d_{mn}^{6}$ over all transmit and receive antenna elements, where $d_{mn}$ is the distance between each transmit element $n$ and receive element $m$. Particularly, $1/d_{mn}^{6}$ dominates in the near-field region, whereas $1/d_{mn}^{2}$ dominates in the far-field region.
We present numerical evaluations in this section, in which we first demonstrate the accuracy of our established channel models in capturing the essence of the wireless channel, and we then exhibit the capacity limit of the H-MIMO system using our derived results.
We finally evaluate the established channel models and capacity limit through extensive numerical simulations. The results validate the feasibility of our channel models and demonstrate the H-MIMO capacity limit, offering various insights for system designs.
In this article, we considered point-to-point H-MIMO systems with arbitrary surface placements in a near-field LoS scenario, in which we established the generalized EM-domain near-field LoS channel models and studied the capacity limit. We first established effective, explicit, and computationally-efficient CD-CM and CI-CM, which are valid in approaching the integral form near-field LoS channel and in capturing the essence of the physical wireless channel, such as the DoF of the channel matrix. We then built an effective analytical framework for deriving the capacity limit. We showed that the capacity limit grows logarithmically with the product of TX and RX element areas and the combined effects of $1/\bar{d}_{mn}^{2}$, $1/\bar{d}_{mn}^{4}$, and $1/\bar{d}_{mn}^{6}$ over all $M$ and $N$ antenna elements. Our result closely captures the exact capacity, offering an effective means for predicting the system performance.
In capacity limit evaluations, we focus on demonstrating the effectiveness and tightness of our derived upper bound in depicting the exact capacity limit. The numerical evaluations are performed in terms of the average transmit SNR, the element spacing, the TX-RX distance, and the number of transmit elements.
B
Those 3D plots can be useful when the validity of the RoA estimate is unclear. Indeed, although those estimates are theoretically certified, SOStab can fail to compute an RoA approximation, due to bad conditioning and numerical solver inaccuracy; to detect such behaviour, one can plot the graph of $w_{d}$: if it is almost flat with values $w_{d}(\boldsymbol{x})\simeq 1$ everywhere, then the RoA estimation failed.
Remember that the PLL dynamics are approximated by their Taylor expansion, while we study the exact SMIB model through a variable change. Hence, for the SMIB model, we need to specify which coordinates of $\boldsymbol{x}$ are actually trigonometric functions of the original variable, by adding the input:
in case the system at hand involves trigonometric functions of phase variables $\theta_{1},\ldots,\theta_{N}$, it is also possible to specify a phase index matrix $\boldsymbol{\Theta}\in\mathbb{R}^{N\times 2}$ whose first (resp. second) column consists of the indices of the sines (resp. cosines) of the $\theta_{i}$ in the recasted variable $\boldsymbol{x}$.
Instead of performing Taylor expansions as in the previous section, it is also possible to directly tackle trigonometric functions, through the algebraic change of variables in Eq. (10) [12].
After the commands displayed in Section 4, a user has access to the description of the inner RoA estimate through the polynomial $v_{d}^{in}(0,\mathbf{x})$. It is also possible to compute an inner estimate and plot both on a 2D graph in the $(\theta,\omega)$ coordinates (see Fig. 6), with the following commands:
C
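As a small worked example of the algebraic change of variables mentioned in the row above, consider generic pendulum-type dynamics $\dot{\theta}=\omega$, $\dot{\omega}=-\sin\theta-k\,\omega$ (an illustrative system, not the SMIB or PLL model itself). Setting $s=\sin\theta$ and $c=\cos\theta$ yields the polynomial recast system
\[
\dot{s}=c\,\omega,\qquad \dot{c}=-s\,\omega,\qquad \dot{\omega}=-s-k\,\omega,\qquad s^{2}+c^{2}=1,
\]
so the phase index matrix $\boldsymbol{\Theta}$ simply records which recasted coordinates play the role of $\sin\theta$ and $\cos\theta$.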
DTL-based IDS for IoT. Some IoT devices may have their functionality altered or hindered by malicious actors, and this will cause infected IoT devices to behave differently when defending against attacks as well as participating in them. Many solutions have been proposed to these problems [164, 165, 166, 167, 155, 156]. To evade the attack-oriented feature-selection process in IoT devices, generic features might be generated based on the header-field data in individual IP packets. The source has a feed-forward neural network model with multi-class classification; DTL is then used in the TD to encode high-dimensional category information to build a binary classifier [164]. The application of DTL to create an IDS for such a continuously evolving IoT environment was investigated by Yılmaz et al. [165]. Here, DTL was used in two contexts: knowledge transfer for creating appropriate intrusion algorithms for new devices, and knowledge transfer for identifying novel attack types. A routing protocol for low-power and lossy networks, which is designed for resource-constrained wireless networks, was used in this research as an example protocol, and specific attacks were made against it. Ullah and Mahmoud [166] described a way to design an IDS scheme that employs CNN, and this was tested on many intrusion datasets for IoT environments; DTL was used to build a target binary and multi-class classification using a multi-class pre-trained model as a SM. Mehedi et al. [167] suggested a DTL-based residual neural network (P-ResNet) IDS that can be well trained with only a minimal amount of TD data. The authors guaranteed its effectiveness by taking dependability performance-analysis factors into account, such as availability, efficacy, and scalability. The proposed P-ResNet-based IDS was found to reach an average detection rate of 87%, outperforming the schemes against which it was compared. The novelty of Guan et al.’s scheme [156] lies in it employing DTL on 5G IoT scenarios to train the TD without labels while retaining only 10% of it. The goal of this approach is to reach an accuracy closer to the results of a fully trained 5G IoT dataset.
To provide reliable intrusion detection against smart-grid threats, Zhang and Yan [120] suggested a domain-adversarial TL system. This approach adds domain-adversarial training to establish a mapping between the labeled SD and the unlabeled TD, using DAAN architecture, as depicted in Fig. 10, so that the classifiers can acquire knowledge in a new feature space while protecting themselves against unidentified threats. Using a real-world hardware-in-the-loop security testbed, a smart-grid cyberattack dataset was gathered to assess the proposed framework with various baseline classifiers. The findings showed that trained classifiers perform better against various kinds and locations of invisible threats, with improvement from 7% up to 36.8%. Table 3 presents a comprehensive comparison of DTL-based network intrusion detection systems, HIDSs, and DA schemes, each providing different performance improvements, methodologies, and considerations. The selection of SMs, datasets, performance metrics, and DTL techniques used in each scheme provides diverse insights into the implementation and effectiveness of these IDS methods. For instance, the selection of ResNet50 and GoogLeNet as SMs by Li et al. [80] achieved 15.41% and 15.68% improvements, respectively, with their CNN outperforming standard classifiers. This was achieved by converting the NSL-KDD dataset to image format, making it a DTL-based NIDS. Masum and Shahriar [84] also used KDDTest+ as a target dataset with VGG-16 and achieved an 8.65% improvement, demonstrating the potential effectiveness of DTL-based NIDSs. AlexNet was used by Sreelatha et al. [89] and attained impressive performance rates on the NSL-KDD and UNSW-NB15 datasets, suggesting the potential benefit of extended equilibrium optimizer (EEO) for updating DTL weights and the use of a SOTA optimizer for significant feature selection. The application of ROBERTa by Ünal et al. [121] led to a 15.5% improvement, exemplifying a DTL-based HIDS for multi-anomaly task detection, which further points to the versatility of DTL. LSTM- and attention-based models also exhibited improvements in the work of Ajayi et al. [122] using an area under the ROC curve (AUC) metric for a host-based IDS scheme and the work of Nedelkoski et al. [123] using principal component analysis (PCA) for vector improvements. Interestingly, Wu et al. (2019) employed a two-stage learning process with ConvNet, resulting in a significant 22.02% improvement on the KDDTest-21 dataset. The traditional ML approach of Niu et al. [124] and the hyperparameter models of Zhao et al. [125] achieved 75% and 16% improvements, respectively, both using unsupervised DTL-based NIDSs. Notably, the latter used a novel clustering-enhanced hierarchical DTL. Li et al. [126] used a sequence-to-sequence (seq2seq) model and employed IP2Vect to convert string fields into vectors for visual clustering. Chadza et al. [102] applied HMMs to detect sequential network attacks, achieving a 59.95% improvement, and Fan et al. [127] used IoTDefender, a federated DTL-based IDS, showing an improvement of 3.13%. Lastly, Singh et al. [128] used the WideDeep model, achieving a notable 19.91% improvement, and Phan et al. [129] employed a multilayer perceptron (MLP) that achieved a 43.58% improvement on their DDoS study.
Supervised techniques. The use of supervised procedures is the second approach. In these approaches, researchers investigate methods to minimize the dependency of the anomalous network traffic detection models on labeled data in the target network under the assumption that there is little labeled network traffic in the target network. To pre-train and fine-tune the model using the segmented UNSW-NB15 dataset, Singla et al. [133] designed a DNN and evaluated DTL’s suitability for DL model training with limited new labeled data for an NIDS. They discovered that when there is very little training data available, using the capability of the fine-tuning allows detection models to perform much better in recognizing new intrusions. Sun et al. [134] investigated the issue of classification of anomalous network traffic in the case of small samples from the standpoint of SD sample selection. They employed the maximum-entropy model Maxent as the fundamental classifier of the TrAdaBoost DTL method (Algorithm 1). The research application possibilities are, however, constrained due to the high labor costs associated with labeling the traffic data in the target network and the high time and computation costs associated with the secondary training model.
The effectiveness of employing FTL trending approaches with DL algorithms to power IDSs to protect IoT applications was shown by Otoum et al. [142], along with a thorough analysis of their use. Their work considered the internet of medical things (IoMT) as a use case. They found that the best performance was obtained when an IDS-based FTL-CNN model was used, and its accuracy reached up to 99.6%. Other works have also used FTL to solve many IDS issues [143, 144, 127, 139]. For example, Otoum et al. [144] suggested an FTL-based IDS for the purpose of protecting patients' connected healthcare devices. The deep neural network (DNN) technique is used by the model to train the network, transfer information from connected edge models, and create an aggregated global model that is tailored to each linked edge device without sacrificing data privacy. Testing was conducted using the CICIDS2017 dataset, and the accuracy obtained was found to be up to 95.14%. Similarly, integration of FTL and DRL-based client selection was exploited by Cheng et al. [143]; the number of participating clients was restricted using a DRL-based client-selection technique. Their results show that the accuracy may be greatly increased and converge to 73% by excluding malicious clients from the model training. Fan et al. [127] suggested IoTDefender, an FTL-based IDS for 5G IoT. The layered and distributed architecture of IoTDefender is perfectly supported by 5G edge computing. IoTDefender aggregates data using FL and creates unique detection models using DTL. It allows all IoT networks to exchange data without compromising user privacy. The authors' test findings showed that IoTDefender is more successful than conventional methods, with a detection accuracy of 91.93% and a reduced percentage of false positives when compared to a single unified model. Similarly, Zhao et al. [139] used an FTL-based IDS to build an IDS framework and exploit the aforementioned advantages of combining both FL and TL. Their experiments were conducted on the UNSW-NB15 dataset, and performance was found to reach an accuracy of 97.23%. In contrast to the work of Zhao et al., Otoum et al. [145] compared both FL and DTL for the same IDS method and concluded that TL-based IDS achieves the highest detection of ≈94%, followed by FL-based IDS with ≈92%.
DTL-based IDS for IoV. Many TL strategies have been presented in recent years. Li et al. [168] considered intrusion detection with various forms of assaults in an IoV system. According to their experimental findings, this model greatly increased detection accuracy: by at least 23% when compared to the traditional ML and DL techniques currently in use. The performance of TL models has recently been improved using the deep computational intelligence system [35]. The in-vehicle network new-generation labeled dataset presented by Kang et al. [169] is well suited to the application of DTL models; this is because DTL techniques have been found to perform better for time-series classification than other traditional ML or DL models [170, 171]. The distinctive contributions of Mehedi et al. [87] include creating a DTL-based LeNet model, evaluation taking into account real-world data, and the selection of effective attributes that are best suited to identifying harmful CAN signals and effectively detecting normal and abnormal behaviors. The output DTL models have demonstrated improved performance for real-time in-vehicle security. Another scheme presented by Otoum and Nayak [157] was developed to secure external networks and an IoV system. In this scheme, each network-connected vehicle acts as a packet inspector to help the DTL-based IDS; the SM is DBN and the TM is a DNN model, and the attacks are discovered, logged, and added to a cloud-based signature database. The packet inspector is supported by a blacklist of intrusion signatures that are installed in the connected vehicles.
D
It should be noted that the trajectories generated by our algorithm may not necessarily correspond to the shortest paths between the initial and final configurations, as shown in Fig. 18.
Moreover, when the robot switches between the modes, the control input vector (23a) changes value discontinuously.
When the robot switches from the obstacle-avoidance mode to the move-to-target mode, the value of the hit point remains unchanged.
In the proposed scheme, similar to [19], depending upon the value of the mode indicator $m\in\{-1,0,1\}=:\mathbb{M}$, the robot operates in two different modes, namely the move-to-target mode ($m=0$) when it is away from the modified obstacles and the obstacle-avoidance mode ($m\in\{-1,1\}$) when it is in the vicinity of a modified obstacle. In the move-to-target mode, the robot moves straight towards the target, whereas during the obstacle-avoidance mode the robot moves around the nearest modified obstacle, either in the clockwise direction ($m=1$) or in the counter-clockwise direction ($m=-1$). We utilize a vector joining the center of the robot and its projection on the modified obstacle-occupied workspace to select between the modes and assign the direction of motion while operating in the obstacle-avoidance mode.
where $\mathbf{x}$ is the location of the center of the robot and $\mathbf{u}\in\mathbb{R}^{2}$ is the control input.
A
The training script, inference script, and trained model are publicly available at https://github.com/bowang-lab/MedSAM. A permanent version is released on Zenodo [medsam-zenodo].
This research was enabled in part by computing resources provided by the Digital Research Alliance of Canada.
The training and validating datasets used in this study are available in the public domain and can be downloaded via the links provided in the Supplementary Tables 16-17. Source data are provided with this paper in the Source Data file. We confirmed that all the image datasets in this study are publicly accessible and permitted for research purposes.
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC, RGPIN-2020-06189 and DGECR-2020-00294) and CIFAR AI Chair programs.
We employed the nnU-Net to conduct all U-Net experiments, which can automatically configure the network architecture based on the dataset properties. In order to incorporate the bounding box prompt into the model, we transformed the bounding box into a binary mask and concatenated it with the image as the model input. This function was originally supported by nnU-Net in the cascaded pipeline, which has demonstrated increased performance in many segmentation tasks by using the binary mask as an additional channel to specify the target location. The training settings followed the default configurations of 2D nnU-Net. Each model was trained on one A100 GPU with 1000 epochs and the last checkpoint was used as the final model.
C
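The nnU-Net comparison described in the row above feeds the bounding-box prompt to the model as an extra binary-mask channel concatenated with the image. A minimal sketch of that preprocessing step is given below; the function name and array layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def concat_box_prompt(image, box):
    """Turn a bounding box into a binary mask and stack it as an extra input channel.

    image : (H, W) or (H, W, C) array
    box   : (x_min, y_min, x_max, y_max) in pixel coordinates
    """
    H, W = image.shape[:2]
    x0, y0, x1, y1 = box
    mask = np.zeros((H, W), dtype=np.float32)
    mask[y0:y1, x0:x1] = 1.0                                   # 1 inside the box, 0 elsewhere
    img = image[..., None] if image.ndim == 2 else image
    return np.concatenate([img.astype(np.float32), mask[..., None]], axis=-1)

# toy usage: a 64x64 grayscale slice with a box prompt around the target
x = concat_box_prompt(np.zeros((64, 64)), (10, 12, 40, 44))
print(x.shape)  # (64, 64, 2): image channel plus box-mask channel
```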
The proof follows similar steps to Proposition 2 given the condition that the nearest gNB provides higher biased-received-desired-signal power than the nearest LoS and NLoS IAB-nodes.
Having obtained the association probabilities, we can obtain the PDFs of the serving distance (i.e., the distance of the typical UE to its serving gNB or IAB-node) in the following proposition.
The backhaul link is the wireless link between the gNB and the IAB-node, whereas the access link is between the UE and the gNB or the IAB-node. More details of the 3GPP architecture can be found in our recent study [2]. An illustration of the proposed model is shown in Fig. 1. In this study, the gNBs and IAB-nodes formed two different tiers. Multi-hop backhaul IAB networks will not be considered because of the challenge of network configuration, and the feasibility of using stochastic geometry in this scenario [16].
Given the statistics of the contact distance, we now derive the probabilities of the typical UE associated with an LoS IAB-node, an NLoS IAB-node, and a gNB in the following propositions.
Proposition 4: Conditioned on the typical UE being served by its nearest gNB at $\mathbf{x}_{\mathrm{m}}$, the corresponding PDF of the serving distance is written as
A
Note that the vector-type BP-SLAM filters [12, 13] with minor modifications, abbreviated as VBP-SLAM2, are equivalent to the SBP-PMB-SLAM, as discussed in Sec. IV-F.
The simulation results show that the proposed set-type BP PMB-SLAM filter outperforms the vector-type BP-SLAM filter [12, 13, 14], in scenarios with informative Poisson point process (PPP) birth.
Note that the vector-type BP-SLAM filters [12, 13] with minor modifications, abbreviated as VBP-SLAM2, are equivalent to the SBP-PMB-SLAM, as discussed in Sec. IV-F.
We introduce the simulation setup for evaluating the SLAM filters, and subsequently the results are discussed.
With the developed set-type BP, we revisit the PMB- and MB-SLAM filters. We first introduce auxiliary variables to factorize their joint SLAM and data association distribution,
C
Many semantic communication systems have been proposed recently. They are specific to particular tasks with different sources and task requirements. For the text transmission task, knowledge-graph based [9] and deep learning (DL) based [6] semantic communication systems were investigated, in which the semantic features of text are extracted through knowledge graph and DL techniques, respectively, transmitted over the air, and received to recover the meaning of the source at the receiver. In a similar way, semantic communication systems for the transmission of other types of sources, like image [7, 10], speech [11], and video [12, 13], have shown better performance than conventional communications. In addition, intelligent tasks to be executed at the receiver, such as image retrieval [14], image classification [15], and speech recognition [16], have received attention as well. The semantic communication systems for these tasks also achieve higher transmission efficiency than conventional ones since only task-related information is transmitted.
The aforementioned semantic communication systems are designed for single-modal tasks where a single user transmits single-modal data. For multi-modal tasks, multiple users transmit different modalities of data, and the received signals will be fused at the receiver to serve the task. In this case, multiple users jointly decide the task performance, which brings challenges to the system design as well as the resource allocation. Xie et al. took the visual question answering (VQA) task as an example, and then proposed a unified framework, named DeepSC-VQA, to support multi-modal tasks [17]. In this system, the text and image from two users are fed into the Transformer-based network, received at the receiver, and then fused through a layer-wise Transformer to predict the answer.
We focus on two types of tasks in this paper, including a single-modal task and a bimodal task. However, the proposed algorithm can be extended to the case of multiple multi-modal tasks easily.
$(N_{\rm Si}^{\rm Sem},N_{\rm Bi}^{\rm Sem})=(N_{\rm Si}^{\rm Con},N_{\rm Bi}^{\rm Con})=(3,3)$ in the simulation. The QoE of each system increases with $|\mathcal{M}|$ since more users will be served with increasing available channels. In addition, we observe that (i) the gap between the sum QoE and the upper bound becomes wider as $|\mathcal{M}|$ increases, (ii) the QoE of the semantic system becomes closer to the upper bound, and (iii) the QoE of the conventional system is much smaller than that of the semantic system. The reason is that the bimodal users with the conventional system can hardly be served since it is difficult for them to achieve the transmission rate threshold, especially for image transmission users. As shown in Fig. 10(b), the number of served users with bimodal tasks is always zero, which indicates that the conventional system has no advantage in the tasks that focus on the effective execution at the receiver and that the semantic system has much stronger compression ability. Thus, to improve the overall QoE of the network, more resources will be assigned to semantic users, facilitating the resource utilization maximization. Consequently, the QoE of the semantic system will approach the upper bound due to more available channels but limited users, and that of the conventional system will approach 3 as there are 3 single-modal users in total. Moreover, since more bimodal users with the conventional systems should be served with increasing $|\mathcal{M}|$ while none of them achieves the requirement, the sum QoE becomes further away from the upper bound. In addition, as the two users in a bimodal user pair jointly decide the task performance, the bimodal user pairs are more competitive than the users with the single-modal task, especially when the number of available channels is only 2. Hence, the number of served users with the bimodal task is larger than that with the single-modal task when $|\mathcal{M}|=2$.
For the bimodal task, we take the VQA task as an example and adopt the DeepSC-VQA model [17]. This task involves two users for text and image transmission, respectively. The two users first extract the semantic symbols from text and image through the DeepSC-VQA transmitter, respectively, and then send them to the BS. The received semantic symbols of text and image will be fused by the DeepSC-VQA receiver to predict the answer. As the two users jointly decide the task performance, the answer accuracy is modeled as a function with respect to the average number of transmitted semantic symbols per word for the text transmission user, the average number of transmitted semantic symbols per image for the image transmission user, and the SINR of the two users, i.e.,
A
Engelson developed an integrated environment that combines geometric design and system modeling tools to assist engineers in constructing and verifying large, moving rigid-body assemblies [12]. However, these approaches may result in unrealistic designs by ignoring function-sharing. Ulrich’s work stands out as an exception [10], as it demonstrates a systematic approach to merge multiple functions, abstracted by different system-level components, in fewer geometric parts. Despite the numerous approaches proposed for system-based geometric design [13, 14, 10, 12, 15, 16], none of them formally introduce the concept of consistency between system and geometric designs, nor do they provide a systematic approach for assessing the validity of the geometric design with respect to the target system behavior.
Figure 1 depicts the process of designing a 3D suspension mechanism based on a system design. After creating an LPM in Modelica [9], mechanical parts are used to realize the lumped components and obtain the expected behaviors. The stiffness and damping of the absorber are modeled by a spring-damper pair, without considering the precise geometric realization. Two potential geometric realization options for the absorber are presented in the library. While there might be various options for the absorber, a designer must verify that the selected option behaves as intended by the lumped component ( 1 in the figure) before the assembly process or design qualified parts guided by the three model consistency conditions explained in Section 3. Additionally, it is crucial to confirm that the final geometric assembly behaves as intended by the LPM (depicted as 2 in the figure) to ensure a qualified design. Currently, the only reliable way to compare design behaviors is to simulate both the LPM and DPM and compare their differential equation solutions a posteriori, which is computationally prohibitive for large-scale models, not to mention the additional challenges related to selecting appropriate time-steps, stability, and convergence. The goal of this paper is to propose a systematic method to check consistency between the system models and CAD/CAE models.
In computer graphics, for instance, deformable objects like cloth fabrics and soft tissues are often converted to lumped mass-spring models for faster simulations due to the simplicity and efficiency of LPMs [17, 18, 19]. Gelder [20] developed a lumped-spring element based on the geometric angle and length information of 2D linear triangular finite elements, while Vincent et al. [21] extended the method to rectangular finite elements. In both cases, the solutions of the converted LPM and the original finite element model can be directly compared. Suriya et al. in [22] proposed a method for converting 3D deformable objects from finite element models to LPMs by minimizing the difference of the stiffness matrices of the two models. However, the dimension of the LPM solution is often much smaller than that of the DPM after spatial discretization, making direct comparison challenging. In other words, a one-to-one correspondence between the solution dimensions of the two models is typically not guaranteed. This limits the applicability of existing methods to the model solution comparison problem addressed in this paper.
Comparing the solutions of LPMs and DPMs is a common requirement in model conversion problems, where the two models must be converted to each other with tolerable differences in their respective solutions.
We develop a simulation-free scheme for checking the consistency between LPMs and DPMs based on the definition proposed above. This idea is to compute a priori error bounds between the solutions of both model types by comparing their parameters, circumventing the costly process of solving differential equations.
C
Bayesian learning, which treats the model parameters as random variables. In Bayesian learning, the distribution over the model parameters is optimized by introducing a data-independent, information-theoretic, regularizer that enforces adherence to a prior distribution (see, e.g., [4]). The optimized distribution is then used to make decisions via ensembles of models that account for the epistemic uncertainty caused by the limited availability of data. However, when the model – prior distribution and likelihood function – are misspecified, Bayesian learning is no longer guaranteed to provide well-calibrated decisions [5, 6, 7]. In practice, model misspecification is hard to ascertain, and hence it is important to develop versions of Bayesian learning that more directly address the criterion of calibration.
For deep learning tools to be widely adopted in applications with strong reliability requirements, such as engineering or health care, it is critical that data-driven models be able to quantify the likelihood of producing incorrect decisions [1, 2]. This is currently an open challenge for conventional frequentist learning, which is known to produce overconfident, and hence poorly calibrated, outputs, especially in the presence of limited training data [3]. This paper contributes to the ongoing line of work concerned with the introduction of novel methodologies for the design of well-calibrated machine learning models.
As discussed in Sec. II-A2, by estimating epistemic uncertainty via ensembling, Bayesian learning can generate better calibrated models as compared to frequentist learning. However, it is well known that the improvements in calibration brought by Bayesian learning are predicated on the assumption that the model – prior distribution and likelihood function – are well specified, providing a sufficiently accurate match with the ground-truth data generation distribution [5, 7, 14]. In light of this limitation, we propose to integrate Bayesian learning with ECE-based regularization, in a manner akin to CA-FNN, in order to enhance the calibration of neural networks trained via Bayesian learning. Accordingly, we refer to the proposed approach as CA-BNN.
Bayesian learning, which treats the model parameters as random variables. In Bayesian learning, the distribution over the model parameters is optimized by introducing a data-independent, information-theoretic, regularizer that enforces adherence to a prior distribution (see, e.g., [4]). The optimized distribution is then used to make decisions via ensembles of models that account for the epistemic uncertainty caused by the limited availability of data. However, when the model – prior distribution and likelihood function – are misspecified, Bayesian learning is no longer guaranteed to provide well-calibrated decisions [5, 6, 7]. In practice, model misspecification is hard to ascertain, and hence it is important to develop versions of Bayesian learning that more directly address the criterion of calibration.
In a separate line of work, recent studies [8, 9] have shown that introducing a data-dependent regularizer that penalizes calibration errors can improve the calibration performance of conventional frequentist learning. However, these studies are limited to decisions made using single models, and they are thus by design not suitable to capture epistemic uncertainty by means of ensembling over multiple models as in Bayesian learning.
D
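Since the row above revolves around calibration-error regularization, a minimal sketch of the standard binned expected calibration error (ECE) that such a regularizer penalizes may be helpful. This is a generic estimator, not the exact regularizer of [8, 9]; the bin count is an illustrative choice.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average of |accuracy - confidence| over confidence bins.

    confidences : (N,) max predicted probability per example
    correct     : (N,) 1 if the prediction was right, else 0
    """
    confidences, correct = np.asarray(confidences, float), np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()     # empirical accuracy in the bin
            conf = confidences[in_bin].mean()  # average confidence in the bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece

# toy usage: four predictions with their confidences and correctness flags
print(expected_calibration_error([0.9, 0.8, 0.6, 0.95], [1, 0, 1, 1]))
```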
4:     Each reliable agent $i$, $i\in\mathcal{R}$, takes $w_{i,k+1}=x_{i,k}$ with an uncoordinated triggered probability $p_{i}$ and keeps $w_{i,k+1}=w_{i,k}$ with the probability $1-p_{i}$.
5:     Each reliable agent $i$, $i\in\mathcal{R}$, updates its current model according to Steps 4-5 of Algorithm 1.
5:     Each reliable agent $i$, $i\in\mathcal{R}$, updates its current model according to Steps 4-5 in Algorithm 1.
2:     Each reliable agent $i$, $i\in\mathcal{R}$, exchanges information according to Step 2 in Algorithm 1.
5:     Each reliable agent $i$, $i\in\mathcal{R}$, updates its current local model according to the local proximal mapping step:
A
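The algorithm steps in the row above hinge on an uncoordinated probabilistic trigger: each reliable agent refreshes its stored model with probability $p_{i}$ and otherwise reuses the stale one. Below is a minimal sketch of that single step; variable names are illustrative, and the surrounding exchange and proximal-mapping updates are omitted.

```python
import numpy as np

def triggered_model_update(x_i, w_i, p_i, rng):
    """Step 4: with probability p_i take w_{i,k+1} = x_{i,k}, else keep w_{i,k}."""
    return x_i.copy() if rng.random() < p_i else w_i

# toy usage for one agent over a few iterations
rng = np.random.default_rng(0)
w = np.zeros(3)
for k in range(5):
    x = rng.standard_normal(3)            # placeholder for the agent's local model x_{i,k}
    w = triggered_model_update(x, w, p_i=0.3, rng=rng)
print(w)
```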