Dataset Viewer (auto-converted to Parquet). The dataset has six string columns: context (100–5.69k characters), A (100–3.76k), B (100–3.61k), C (100–5.61k), D (100–3.87k), and label (4 classes).
C²-WORD outperforms
A²RC and WORD in the sense of WNG.
selection of A²RC is optimal in the sense
the existing A²RC
A²RC (in the sense of WNG).
D
The two-layer CNN S2I performed worse even than the 1D variants, indicating that increasing the S2I depth is not beneficial.
For the purposes of this paper we use a variation of the database (https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition) in which the EEG signals are split into segments of 178 samples each, resulting in a balanced dataset that consists of 11,500 EEG signals.
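A minimal sketch of this segmentation step, assuming the raw recordings are available as 1-D NumPy arrays (function and variable names here are illustrative, not from the original paper):

```python
import numpy as np

def segment_eeg(recordings, segment_len=178):
    """Split each raw EEG recording into non-overlapping segments of
    `segment_len` samples, discarding any trailing remainder.

    recordings: iterable of 1-D NumPy arrays (one per recording).
    Returns an array of shape (n_segments, segment_len).
    """
    segments = []
    for rec in recordings:
        n = len(rec) // segment_len
        segments.extend(rec[: n * segment_len].reshape(n, segment_len))
    return np.asarray(segments)

# E.g., 500 recordings of 4097 samples each give 4097 // 178 = 23 segments
# per recording, i.e. 500 * 23 = 11500 segments, matching the dataset size.
```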
The spectrogram S2I results contradict the expectation that an interpretable time-frequency representation would help in finding good features for classification.
The names of the classes are shown on the right, along with the predictions for this example signal.
C
UAVs have several power levels and altitude levels. In the midst of extreme environments, a UAV cannot change its transmit power dramatically but can merely move to an adjacent power level [12]. Similarly, altitude changes are limited so that only adjacent-altitude-level transitions are permitted in each move. We denote the power set and altitude set by $P=\{P_1,\ldots,P_k,\ldots,P_{n_p}\}$ and $h=\{h_1,\ldots,h_k,\ldots,h_{n_h}\}$, respectively, where $n_p$ is the number of power levels and $n_h$ is the number of altitude levels. We assume that the gaps between adjacent levels of power and of altitude are equal. Let $\Delta P$ and $\Delta h$ denote the distance between adjacent power levels and altitude levels, respectively.
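As a rough illustration of this adjacency constraint on the strategy sets, the following sketch enumerates the feasible next levels; the concrete level values are hypothetical:

```python
def adjacent_levels(index, num_levels):
    """Feasible next level indices under the adjacency constraint:
    a UAV may stay at its current level or move to a neighboring one."""
    return [j for j in (index - 1, index, index + 1) if 0 <= j < num_levels]

# Hypothetical sets with equal gaps dP and dh, as assumed in the text.
n_p, n_h = 5, 4
P = [1.0 + k * 0.5 for k in range(n_p)]   # power levels, e.g. dP = 0.5 mW
h = [50 + k * 10 for k in range(n_h)]     # altitude levels, e.g. dh = 10 m

# A UAV at power level index 2 may move only within levels {1, 2, 3}.
print([P[j] for j in adjacent_levels(2, n_p)])
```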
In post-disaster scenarios, a great many UAVs are required to support users [4]. Therefore, we introduce aggregative game theory into such scenarios and permit UAVs to learn within constrained strategy sets. Because an aggregative game integrates the impact of all other UAVs on any one UAV, it reduces the complexity of receiving information and the data-processing load on UAVs. For instance, in a conventional game applied to a scenario with $N$ UAVs, each UAV must analyze the $N$ strategies that determine the noise and coverage sizes of every other individual UAV. An aggregative game, however, only needs to process the aggregated noise and coverage sizes of all other UAVs. This advantage becomes more pronounced when the number of UAVs is extremely large, since figuring out each other's strategies is unrealistic [8]. In terms of constrained strategy sets, due to environmental factors such as violent winds [11] and tempestuous rainstorms, a UAV's action set is restricted: it cannot switch rapidly between extreme power or altitude levels, but only to levels adjacent to the current one [12]. For instance, the power can change from 1 mW to 1.5 mW in the first time slot and from 1.5 mW to 2 mW in the next one, but it cannot jump directly from 1 mW to 2 mW. Therefore, the aggregative game with constrained sets is an ideal model for post-disaster scenarios.
Fig. 12 presents a sketch of a UAV's utility as its power varies, with UAV altitudes fixed. When the other UAVs' power profiles increase, the interference grows and the curve moves down; high interference reduces the UAV's utility. Fig. 12 also shows that utility first decreases and then increases as power grows. Both small and large power yield high utility, because small power saves energy while large power increases SNR. A UAV might therefore select the largest power to increase its own utility. However, the more power one UAV uses, the more interference the other UAVs receive, and their utilities drop. Hence, for the sake of the global utility, the largest power is not the optimal strategy for the UAV ad-hoc network as a whole; the best power lies at some value smaller than the maximum (the optimal value in the figure is only indicative).
When UAVs communicate, the signal-to-noise ratio (SNR) mainly determines the quality of service. UAVs' transmissions and inherent noise interfere with one another. Since there are hundreds of UAVs in the system, each UAV is unable to sense all the other UAVs' power explicitly; it can only sense and measure the aggregative interference and treat it as an integral influence. Although increasing power improves SNR, excessively large power causes more energy consumption and results in less running time. Therefore, proper power control for UAVs needs to be carefully designed.
To investigate UAV networks, novel network models should jointly consider power control and altitude for practicability. Energy consumption, SNR, and coverage size are the key factors that decide the performance of a UAV network [6]. Respectively, power control determines a UAV's energy consumption and signal-to-noise ratio (SNR), while altitude decides the number of users that can be supported [7] and also determines the minimum required SNR: the higher a UAV flies, the more users it can support and the higher the SNR it requires. Therefore, power control and altitude are two essential factors. Extensive research has built models focusing on various network factors. For example, the work in [8] established a system model with channel and time-slot selection, and the authors of [9] constructed a coverage model that considered each agent's coverage size on a network graph. However, such models usually consider only one specific characteristic of networks and ignore the system's multiplicity, which would bring great losses in practice, since UAVs would consume too much power to improve SNR or to increase coverage size. Even though UAV systems in 3D scenarios with the multiple factors of coverage and charging strategies have been studied in [7], that work overlooks power control, which means that UAVs might waste a great deal of energy. To sum up, for UAV ad-hoc networks in post-disaster scenarios, power control and altitude, which determine energy consumption, SNR, and coverage size, ought to be considered to make the model credible [10].
C
This section discusses the advancements in semantic image segmentation using convolutional neural networks (CNNs), which have been applied to interpretation tasks on both natural and medical images (Garcia-Garcia et al., 2018; Litjens et al., 2017). Although artificial neural network-based image segmentation approaches have been explored in the past using shallow networks (Reddick et al., 1997; Kuntimad and Ranganath, 1999) as well as works which relied on superpixel segmentation maps to generate pixelwise predictions (Couprie et al., 2013), in this work, we focus on deep neural network based image segmentation models which are end-to-end trainable. The improvements are mostly attributed to exploring new neural architectures (with varying depths, widths, and connectivity or topology) or designing new types of components or layers.
Next, encoder-decoder segmentation networks (Noh et al., 2015), such as SegNet (Badrinarayanan et al., 2015), were introduced. The role of the decoder network is to map the low-resolution encoder features to full input-resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower-resolution input feature maps. Specifically, the decoder uses the pooling indices (Figure 5) computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. The architecture (Figure 5) consists of a sequence of non-linear processing layers (encoder) and a corresponding set of decoder layers followed by a pixel-wise classifier. Typically, each encoder consists of one or more convolutional layers with batch normalization and a ReLU non-linearity, followed by non-overlapping max-pooling and sub-sampling. The sparse encoding that results from the pooling process is upsampled in the decoder using the max-pooling indices from the encoding sequence.
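The index-passing mechanism can be made concrete with a short PyTorch sketch (an illustration of the idea, not the original SegNet code):

```python
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One SegNet-style encoder stage: conv + BN + ReLU, then max-pooling
    that also returns the pooling indices for the matching decoder stage."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)

    def forward(self, x):
        return self.pool(self.conv(x))  # (pooled features, indices)

class DecoderBlock(nn.Module):
    """Matching decoder stage: non-linear upsampling with the stored
    max-pooling indices, followed by conv + BN + ReLU."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, indices):
        return self.conv(self.unpool(x, indices))
```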
In order to preserve the contextual spatial information within an image as the filtered input data progresses deeper into the network, Long et al. (2015) proposed to fuse the output with shallower layers’ output. The fusion step is visualized in Figure 4.
The quantitative evaluation of segmentation models can be performed using pixel-wise and overlap based measures. For binary segmentation, pixel-wise measures involve the construction of a confusion matrix to calculate the number of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) pixels, and then calculate various metrics such as precision, recall (also known as sensitivity), specificity, and overall pixel-wise accuracy. They are defined as follows:
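The text cuts off before the definitions; the standard confusion-matrix forms, which the passage presumably intends, are:

```latex
\begin{align*}
\text{precision} &= \frac{TP}{TP+FP}, &
\text{recall (sensitivity)} &= \frac{TP}{TP+FN},\\
\text{specificity} &= \frac{TN}{TN+FP}, &
\text{accuracy} &= \frac{TP+TN}{TP+TN+FP+FN}.
\end{align*}
```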
As one of the first high-impact CNN-based segmentation models, the fully convolutional network of Long et al. (2015) performs pixel-wise labeling. They proposed up-sampling (deconvolving) the output activation maps, from which the pixel-wise output can be calculated. The overall architecture of the network is visualized in Figure 3.
D
The UAVs' trajectory on the $xy$-plane is assumed to follow the Smooth-Turn mobility model [34], which can capture the mobility of UAVs in scenarios like patrolling. In this model, a UAV circles around a certain point on the horizontal ($xy$) plane for an exponentially distributed duration, then selects a new center point with a turning radius whose reciprocal obeys the normal distribution $\mathcal{N}(0,\sigma_r^2)$. According to [34], $\sigma_r^2$ plays an important role in the degree of randomness. In the vertical direction, the UAVs are in uniform linear motion with velocities $v_{t(r),z}$ obeying the uniform distribution $v_{t(r),z}\sim\mathcal{U}(v_{t(r),z,\min},v_{t(r),z,\max})$. Moreover, to maintain the communication link with the r-UAV, each t-UAV keeps its position within a limited region at all times, such that its distance to the r-UAV is less than $D_{\mathrm{r,max}}$. The distance between UAVs is also kept no less than $D_{\mathrm{r,min}}$ to ensure flight safety. The relationship between position and attitude (equations (8)-(10) in [35]) is used to determine the UAVs' attitude.
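A rough NumPy sketch of these mobility assumptions (all names and the mean staying duration are illustrative, not from [34]):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_turn(sigma_r, mean_duration):
    """One Smooth-Turn update: the reciprocal of the turning radius is drawn
    from N(0, sigma_r^2), and the turn lasts an exponentially distributed
    duration. Larger sigma_r means a higher degree of randomness."""
    inv_radius = rng.normal(0.0, sigma_r)        # 1/r ~ N(0, sigma_r^2)
    radius = np.inf if inv_radius == 0 else 1.0 / inv_radius
    duration = rng.exponential(mean_duration)    # time until a new center
    return radius, duration

def sample_vertical_velocity(v_min, v_max):
    """Vertical velocity drawn uniformly, v_z ~ U(v_min, v_max)."""
    return rng.uniform(v_min, v_max)
```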
A conceptual frame structure is designed which contains two types of time slots: the exchanging slot (e-slot) and the tracking slot (t-slot). Let us first focus on the e-slot. It is assumed that UAVs exchange MSI every $T$ t-slots, i.e., in an e-slot, to save resources for payload transmission. In the MSI exchanging period of e-slot $t$, the r-UAV exchanges its historical MSI with each t-UAV, and each t-UAV exchanges its historical MSI only with the r-UAV, over the low-rate control links that work in the lower-frequency band [36]. Then the t-UAVs and r-UAV perform codeword selection. Employing the GP-based MSI prediction algorithm proposed in [31], each t-UAV predicts the MSI of the r-UAV, and the r-UAV predicts the MSI of all t-UAVs, for the next $T$ t-slots. In the tracking-error bounding period, the UAVs estimate the TE of AOAs and AODs based on the GP prediction error. Compared to the e-slot, the t-slot has no MSI exchanging, prediction, or error bounding, but does include TE-aware codeword selection. Specifically, in a t-slot the t-UAVs and r-UAV achieve adaptive beamwidth control against AOD/AOA prediction errors by employing TE-aware codeword selection. Compared to the motion-aware protocol in [31], the new TE-aware protocol can be applied to UAV mmWave networks with higher mobility, including random trajectories and high velocity. Since the new TE-aware protocol contains the error bounding and TE-aware codeword selection periods, it is able to deal with the beam tracking error caused by the high mobility of UAVs. We detail how to bound the TE and how to select a proper codeword with suitable beamwidth against the TE in the following subsections.
Moreover, the data block of MSI is set as $B_{\text{MSI}}=n_{\text{MSI}}\times T\times b_{\text{MSI}}$ bits, where $n_{\text{MSI}}=6$ is the dimension of the MSI at each slot, $T=50$ is the number of slots between adjacent MSI exchanges, and each dimension of the MSI at each slot is represented by $b_{\text{MSI}}=4$ bits. The transmission rate of the lower band is set as $C_{\text{LB}}=500$ kbps [38], the data block is set as $B_{\text{data}}=1$ Mbit, $C_{\text{ave}}$ is the average rate of the mmWave band, $D_{k,\max}$ is the maximum distance between a t-UAV and the r-UAV, and $c$ is the velocity of light. As the computational complexity of the algorithms for the r-UAV is higher than that of the t-UAVs, the local processing time mainly depends on the time for the r-UAV to perform the beam tracking algorithms, which is estimated based on the numbers of multiplications and additions and on the CPU of the UAVs. The CPU Intel i7-8550U [39] with a processor base frequency of 1.8 GHz is considered in the simulation; it is adopted by the commonly-used onboard computer "Manifold 2", which supports many types of UAVs such as the DJI Matrice 600 Pro and the Matrice 210 series [40].
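With these values, the block size and the lower-band exchange time work out as follows (ignoring protocol overhead):

```latex
B_{\text{MSI}} = n_{\text{MSI}} \times T \times b_{\text{MSI}}
             = 6 \times 50 \times 4 = 1200 \ \text{bits}, \qquad
t_{\text{exch}} \approx \frac{B_{\text{MSI}}}{C_{\text{LB}}}
             = \frac{1200}{500 \times 10^{3}} = 2.4 \ \text{ms},
```

plus a propagation delay of at most $D_{k,\max}/c$.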
Thanks to the integrated sensors, such as inertial measurement unit (IMU) and global position system (GPS), the UAV is able to derive its own MSI. However, the r-UAV also needs the MSI of all t-UAVs and each t-UAV needs the r-UAV’s MSI for beam tracking, which is challenging for the r-UAV/t-UAVs.
Specifically, the r-UAV/t-UAV’s historical MSI is first exchanged with the t-UAV/r-UAV over a lower-frequency band and then the t-UAV will predict the future MSI of the r-UAV based on the historical MSI by using the GP-based MSI prediction model.
C
The inner product of the subgradients and the error between the local optimizers' states and the global optimal solution inevitably appears in the recursive inequality of the conditional mean square error. This prevents the nonnegative supermartingale convergence theorem from being applied directly.
III. The co-existence of random graphs, subgradient measurement noises, and additive and multiplicative communication noises is considered. Compared with the case of only a single random factor, the coupling terms of the different random factors inevitably affect the mean square difference between the optimizers' states and any given vector. Moreover, multiplicative noises that depend on the relative states between adjacent local optimizers make the states, graphs, and noises coupled together, so it becomes more complex to estimate the mean square upper bound of the local optimizers' states (Lemma 3.1). We first employ the property of conditional independence to deal with the coupling term of the different random factors. Then, we prove that the mean square upper bound of the coupling term between states, network graphs, and noises depends on the second-order moment of the difference between the optimizers' states and the given vector. Finally, we obtain an estimate of the mean square increasing rate of the local optimizers' states in terms of the step sizes of the algorithm (Lemma 3.2).
We first estimate the mean square increasing rate of the states in Lemma III.2, and then substitute this rate into the recursive inequality (11) of the conditional mean square error between the state and the global optimal solution.
To this end, we first estimate the upper bound of the mean square increasing rate of the local optimizers' states (Lemma 3.2). Then we substitute this upper bound into the Lyapunov function difference inequality of the consensus error and obtain the estimated convergence rate of mean square consensus (Lemma 3.3). Further, these rate estimates are substituted into the recursive inequality of the conditional mean square error between the states and the global optimal solution. Finally, by properly choosing the step sizes, we prove that the states of all local optimizers converge to the same global optimal solution almost surely by the non-negative supermartingale convergence theorem. The key point is that the algorithm step sizes should be chosen carefully to eliminate the possible increasing effect caused by the linear growth of the subgradients and to balance the rates between achieving consensus and seeking the optimal solution.
D
$H_1$, $H_2$, and $H$ are defined as $H_1(s)=K_vK_pG(s)$, $H_2(s)=K_vK_pG(s)$, and $H(s)=K_vsG(s)+1+K_vK_pG(s)$.
One can easily obtain the transfer function from the reference trajectories to the actual position and velocity as
where $v_{s,k}$ is the sampled velocity along the path at time step $k$ and $T$ is the sampling time.
Given (3), one can obtain a discrete-time model with sampling time $T=2.5\,\mathrm{ms}$ as
Following (4), (5), (6) and (7), we obtain a linear time-varying system of the form $\mathbf{z}_{k+1}=A_k\mathbf{z}_k+B_k\mathbf{u}_k+\mathbf{d}_k$.
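A minimal simulation sketch for a system of this form (names are illustrative; the per-step matrices $A_k$, $B_k$ and disturbances $\mathbf{d}_k$ would come from (4)-(7)):

```python
import numpy as np

def rollout(A_seq, B_seq, d_seq, z0, u_seq):
    """Simulate the linear time-varying system z_{k+1} = A_k z_k + B_k u_k + d_k.

    A_seq, B_seq: sequences of per-step matrices; d_seq, u_seq: per-step vectors.
    Returns the stacked state trajectory, one row per time step.
    """
    z = np.asarray(z0, dtype=float)
    traj = [z]
    for A, B, d, u in zip(A_seq, B_seq, d_seq, u_seq):
        z = A @ z + B @ u + d
        traj.append(z)
    return np.stack(traj)
```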
C
This indicates that as the compression becomes more accurate, its impact exhibits "marginal effects".
In other words, when the compression errors are not the bottleneck for the convergence, sacrificing the communication costs for faster convergence will reduce the communication efficiency.
In decentralized optimization, efficient communication is critical for enhancing algorithm performance and system scalability. One major approach to reduce communication costs is considering communication compression, which is essential especially under limited communication bandwidth.
When $b=6$ or $k=20$, the trajectories of CPP are very close to those of the exact Push-Pull/$\mathcal{AB}$ method, which indicates that when the compression errors are small, they are no longer the bottleneck of convergence.
The existence of compression errors may result in inferior convergence performance compared to uncompressed or centralized algorithms. For example, the methods considered by [41, 42, 43, 44, 45, 46] can only guarantee to reach a neighborhood of the desired solutions when the compression errors exist.
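The $b$-bit quantization and top-$k$ sparsification referred to above can be sketched as follows (simple illustrative operators, not the exact compressors of the cited works):

```python
import numpy as np

def top_k(x, k=20):
    """Top-k sparsification of a 1-D vector: keep the k largest-magnitude
    entries and zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def quantize(x, b=6):
    """Uniform b-bit quantization: snap each entry onto one of 2^b levels
    spanning the vector's range."""
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    levels = 2 ** b - 1
    return lo + np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo)
```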
A
Moreover, a smaller batch size degrades overall performance, including downstream classification accuracy.
In our experiments, we will use the same pre-trained model parameters to initialise the models for different downstream tasks. During fine-tuning, we fine-tune the parameters of all the layers, including the self-attention and token embedding layers.
(b), (c) the fine-tuning procedure for note-level and sequence-level classification. Apart from the last few output layers, both pre-training and fine-tuning use the same architecture.
To train Transformers, all input sequences are required to have the same length. For both REMI and CP, we divide the token sequence of each entire piece into a number of shorter sequences of equal length 512, padding those at the end of a piece to length 512 with an appropriate number of Pad tokens.
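A small sketch of this chunk-and-pad step (the Pad token id is an assumption; token sequences are plain Python lists here):

```python
def chunk_and_pad(tokens, seq_len=512, pad_token=0):
    """Split one piece's token sequence into consecutive length-512 chunks,
    padding the final chunk with Pad tokens so every chunk has equal length."""
    chunks = []
    for i in range(0, len(tokens), seq_len):
        chunk = tokens[i : i + seq_len]
        chunk = chunk + [pad_token] * (seq_len - len(chunk))
        chunks.append(chunk)
    return chunks
```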
For fine-tuning, we create training, validation and test splits for each of the three datasets of the downstream tasks with the 8:1:1 ratio at the piece level (i.e., all the 512-token sequences from the same piece are in the same split).
D
A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, “Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks,” in Proc. 23rd Int. Conf. Mach. Learning (ICML), Pittsburgh, USA, Jun. 2006, pp. 369–376.
H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, “Learning to optimize: Training deep neural networks for interference management,” IEEE Trans. Signal Process., vol. 66, no. 20, pp. 5438–5453, Oct. 2018.
M. Schuster and K. Paliwal, “Bidirectional recurrent neural networks,” IEEE Trans. Signal Process., vol. 45, no. 11, pp. 2673–2681, Nov. 1997.
C
The computational running time was analysed for B2, B6, and the more complex InceptionV3 (IV3) model, both fully re-trained (F) and with transfer learning (TL), on the PCAM dataset. The results are shown in Table 2; the time corresponds to the average time observed for one epoch. We can compare the effects of model architecture and hardware GPU acceleration. As expected, the running time increases with the complexity and depth of the model. The IV3-F model takes 4 to 10 times longer to train than the simple two-convolutional-layer B2 model, depending on the GPU card used, and the B6 CNN model takes 1.7 to 2 times longer than B2. With the InceptionV3 model, using transfer learning obviously saves a great deal of training time, as full model training takes about 3 times longer on all GPU models. In fact, even though the IV3-TL model (transfer learning) is much more complex, its running time is comparable to that of the B2 and B6 models. Regarding the different GPU cards tested here, more recent and powerful cards decrease the computing time quite drastically, with an acceleration factor between 5 and 12 for the most recent architecture tested (A100) compared to the oldest (K80), across all the CNN models. It is worth noting that the deepest model tested here can be fully trained in about one hour with a V100 or A100 GPU card.
Figure 4: Boxplots showing the AUC score for different CNN models for fully re-trained models (F) or with transfer learning (TL).
Precise staging by expert pathologists of breast cancer axillary nodes, a tissue commonly used for the detection of early signs of tumor spreading, is an essential task that determines the patient's treatment and their chances of recovery. However, it is a difficult task that has been shown to be prone to misclassification. Algorithms, and in particular deep-learning-based convolutional neural networks, can help the experts in this task by analyzing fully digitized slides of microscopic stained tissue sections. In this study, I evaluated twelve different CNN architectures and different hardware acceleration devices for breast cancer classification on two different public datasets consisting of hundreds of thousands of images. Hardware acceleration devices can improve the training time by a factor of five to twelve, depending on the model used. On the other hand, increasing the convolutional depth increases the training time by a factor of four to six, depending on the acceleration device used. More complex models tend to perform better than very simple ones, especially when fully retrained on the digital pathology dataset, but the relationship between model complexity and performance is not straightforward. Transfer learning from ImageNet always performs worse than fully retraining the models. Fine-tuning the hyperparameters of the model improves the results, with the best model tested in this study showing very high performance, comparable to current state-of-the-art models.
Table 2: Run time in seconds for one epoch on different GPU architectures. NbCU: number of CUDA cores. Pp: processing power in GFlops. TL: transfer learning. F: full retraining.
C
Then, the optimal complex wavefront modulation for the neural étendue expander would be the inverse Fourier transform of the target scene, and, as such, we do not require any additional modulation on the SLM. The SLM therefore can be set to zero-phase modulation.
To assess whether the optimized neural étendue expander $\mathcal{E}$, shown in Fig. 1b, has learned the image statistics of the training set, we evaluate the virtual frequency modulation $\widetilde{\mathcal{E}}$, defined as the spectrum of the image generated with the neural étendue expander and zero-phase SLM modulation, as
To further understand this property of a neural étendue expander, we consider the reconstruction loss $\mathcal{L}_T$ for a specific target image $T$.
If we generalize this single-image case to diverse natural images, the neural étendue expander is expected to preserve the common frequency statistics of natural images, while the SLM fills in the image-specific residual frequencies to generate a specific target image.
Therefore, obtaining the optimal neural étendue expander, which minimizes the reconstruction loss $\mathcal{L}_T$, results in a virtual frequency modulation $\widetilde{\mathcal{E}}$ that resembles the natural-image spectrum $\mathcal{F}(T)$ averaged over diverse natural images. Also, the retinal frequency filter $\mathcal{F}(f)$ leaves the higher spectral bands outside of the human retinal resolution unconstrained. This allows the neural étendue expander to push undesirable energy towards higher frequency bands, where it manifests as imperceptible high-frequency noise to human viewers.
C
Medical imaging methods such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are essential to clinical diagnosis and surgery planning. Hence, high-resolution medical images are desirable to provide the necessary visual information about the human body. In recent years, many DL-based methods have also been proposed for medical image SR.
et al., 2018) believed that low-resolution images in the real world constitute a specific distribution in high-dimensional space, and used a generative adversarial network to generate low-resolution images consistent with this distribution from high-resolution images. After that, Yuan et al. (Yuan
In recent years, more and more Transformer-based models have been proposed. For example, Chen et al. proposed the Image Processing Transformer (IPT (Chen et al., 2021)), which was pre-trained on large-scale datasets. In addition, contrastive learning is introduced for different image-processing tasks, so the pre-trained model can be efficiently employed on the desired task after fine-tuning. However, IPT (Chen et al., 2021) relies on large-scale datasets and has a large number of parameters (over 115.5M), which greatly limits its application scenarios. To solve this issue, Liang et al. proposed SwinIR (Liang et al., 2021) for image restoration, based on the Swin Transformer (Liu et al., 2021b). Specifically, residual Swin Transformer blocks (RSTB) are proposed for feature extraction, and DIV2K+Flickr2K is used for training. To address the lack of direct interaction between different windows in SwinIR, Zamir et al. (Zamir et al., 2022) proposed Restormer, which reconstructs high-quality images by embedding CNNs within the Transformer and performing local-global learning at multiple scales. Chen et al. proposed CAT (Chen et al., 2022d) to extend the attention region and aggregate features across different windows. Then, to activate more of the pixels that the Transformer focuses on, Chen et al. proposed HAT (Chen
et al., 2023c) proposed a Cross-receptive Focused Inference Network (CFIN) that can incorporate contextual modeling to achieve good performance with limited computational resources. Zhu et al. (Zhu et al., 2023) designed an Attention Retractable Frequency Fusion Transformer (ARFFT) to strengthen the representation ability and extend the receptive field to the whole image. Li et al. (Li et al., 2023d) proposed a concise and powerful Pyramid Clustering Transformer Network (PCTN) for lightweight SISR. Chen et al. (Chen
For instance, Chen et al. proposed a Multi-level Densely Connected Super-Resolution Network (mDCSRN (Chen et al., 2018)) with GAN-guided training to generate high-resolution MR images, which can train and infer quickly. In (Wang
D
SHAP visualisations such as that in Fig. 1(c) can be sparse, indicating that only a few spectro-temporal bins contribute to the classifier output. A comparison of the time waveform in Fig. 1(a) and the SHAP values in Fig. 1(c) shows that this particular classifier essentially ignores information contained in non-speech regions, focusing instead upon the speech interval between approximately 1 and 2 seconds and, furthermore, upon frequencies mostly below 1.5 kHz.
It shows the degree to which each spectro-temporal bin contributes to the classifier output. Darker red points indicate the spectro-temporal bins which lend stronger support for the positive class (here bona fide). In contrast, darker blue points indicate greater support for the negative class (here, spoofed speech).
In the remainder of this paper we describe our use of DeepSHAP to help explain the behaviour of spoofing detection systems. We show a number of illustrative examples for which the input utterances, all drawn from the ASVspoof 2019 LA database [13], are chosen specially to demonstrate the potential insights which can be gained. Given the difficulty in visualising true SHAP values, in the following we present average temporal or spectral results. Given our focus on spoofing detection, we present results for both bona fide and spoofed utterances and the temporal or spectral regions which favour either bona fide or spoofed classes. Results hence reflect where, either in time or frequency, the model has learned to focus attention and hence help to explain its behaviour in terms of how the model responds to a particular utterance.
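The averaging described above can be sketched as follows (the array layout and names are assumptions, not taken from the paper):

```python
import numpy as np

def shap_profiles(shap_values):
    """Collapse one utterance's SHAP values into the averaged views shown in
    the figures. `shap_values` is assumed to have shape
    (n_freq_bins, n_time_frames); positive averages favour the bona fide
    class, negative averages favour the spoofed class."""
    temporal = shap_values.mean(axis=0)  # average over frequency: per-frame support
    spectral = shap_values.mean(axis=1)  # average over time: per-bin support
    return temporal, spectral
```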
Fig. 2 shows the results of SHAP analysis for the ‘LA_E_1832578’ utterance and the PC-DARTS classifier. The plot shows the time waveform (a) and the temporal variation in SHAP values averaged across the full spectrum (b). This first example shows that the classifier has learned to focus predominantly upon non-speech intervals. The support in speech intervals for either class is comparatively lower. These observations are unexpected; it is assumed a priori that spoofed speech detection systems should operate upon speech. This observation corroborates the findings in [17], and also [19] which shows that reliable bona fide/spoof decisions might even be inferred from the length of the non-speech interval.
A second visualisation focusing on this specific region is displayed in Fig. 1(d). Ignoring for now whether or not the SHAP values are positive or negative, it exhibits a high degree of correlation to the fundamental frequency and harmonics in the spectrogram, indicating the focus of the classifier on these same components. Last, while the presence of dark blue traces in Fig. 1(d) indicate components of the spectrogram which favour the negative class, the overall dominance of red colours (though not all dark red) indicate a greater support for the positive class (the classifier output correctly indicates bona fide speech).
D
CBFs that account for uncertainties in the system dynamics have been considered in two ways. The authors in [10] and [11] consider input-to-state safety to quantify possible safety violations. Conversely, the work in [12] proposes robust CBFs to guarantee robust safety by accounting for all permissible errors within an uncertainty set. Input delays within CBFs were discussed in [13, 14]. CBFs that account for state-estimation uncertainties were proposed in [15] and [16]. Relying on the same notion of measurement-robust CBFs as in [15], the authors in [17] present empirical evaluations on a Segway. While the notion of ROCBFs that we present in this paper is inspired by the measurement-robust CBFs of [15], we also consider uncertainties in the system dynamics and focus on learning valid CBFs from expert demonstrations. Similar to the notion of ROCBFs, the authors in [18] jointly consider additive disturbances in the system dynamics and state-estimation errors.
Control barrier functions (CBFs) were introduced in [3, 4] to render a safe set controlled forward invariant. A CBF defines a set of safe control inputs that can be used to find a minimally invasive safety-preserving correction to a nominal control law by solving a convex quadratic program. Many variations and extensions of CBFs have appeared in the literature, e.g., compositions of CBFs [5], CBFs for multi-robot systems [6], CBFs encoding temporal logic constraints [7], and CBFs for systems with higher relative degree [8]. Finally, CBFs and Hamilton-Jacobi methods were found to share connections [9].
Learning with CBFs: Approaches that use CBFs during learning typically assume that a valid CBF is already given, while we focus on constructing CBFs so that our approach can be viewed as complementary. In [19], it is shown how safe and optimal reward functions can be obtained, and how these are related to CBFs. The authors in [20] use CBFs to learn a provably correct neural network safety guard for kinematic bicycle models. The authors in [21] consider that uncertainty enters the system dynamics linearly and propose to use robust adaptive CBFs, as originally presented in [22], in conjunction with online set membership identification methods. In [23], it is shown how additive and multiplicative noise can be estimated online using Gaussian process regression for safe CBFs. The authors in [24] collect data to episodically update the system model and the CBF controller. A similar idea is followed in [25] where instead a projection with respect to the CBF condition is episodically learned. Imitation learning under safety constraints imposed by a Lyapunov function was proposed in [26]. Further work in this direction can be found in
A promising research direction is to learn CBFs from data. The authors in [36] construct CBFs from safe and unsafe data using support vector machines, while authors in [37] learn a set of linear CBFs for clustered datasets. The authors in [38] proposed learning limited duration CBFs and the work in [39] learns signed distance fields that define a CBF. In [40], a neural network controller is trained episodically to imitate an already given CBF. The authors in [41] learn parameters associated with the constraints of a CBF to improve feasibility. These works present empirical validations, but no formal correctness guarantees are provided. The authors in [42, 43, 44, 45] propose counter-example guided approaches to learn Lyapunov and barrier functions for known closed-loop systems, while Lyapunov functions for unknown systems are learned in [46]. In [47, 48, 49] control barrier functions are learned and post-hoc verified, e.g., using Lipschitz arguments and satisfiability modulo theory, while [50] uses a counter-example guided approach. As opposed to these works, we make use of safe expert demonstrations. Expert trajectories are utilized in [51] to learn a contraction metric along with a tracking controller, while motion primitives are learned from expert demonstrations in [52]. In our previous work [53], we proposed to learn CBFs for known nonlinear systems from expert demonstrations. We provided the first conditions that ensure correctness of the learned CBF using Lipschitz continuity and covering number arguments. In [54] and [55], we extended this framework to partially unknown hybrid systems. In this paper, we focus on state estimation and provide sophisticated simulations of our method in CARLA.
C
a $90^{\circ}$ difference in Tx- or Rx-polarization angles, as described
For the low SNR regime such as 5 dB SNR, the theoretically derived optimal Tx-polarization angles themselves have insignificant differences from numerically derived optimal Tx-polarization angles. The simulation results for the low SNR regime are omitted owing to the page limit.
The differences between theoretically and numerically obtained optimal Tx-polarization angles are considerable. This is due to the fact that the approximation (8) is less accurate at higher SNRs.
high SNR regime, utilizing our joint polarization pre-post coding improves PR-MIMO channel capacity with around 5 dB, 4 dB, and 3 dB SNR gains in
and receiver and uses random polarization, in the low SNR regime (below 3 dB). The degrees of freedom (slope at high SNR) are the same in all three cases, since they are determined by the number of antenna ports.
A
$$A_{\Sigma}^{U}(\lambda R)=A_{\Sigma}^{U}(R),\qquad A_{\Sigma}^{NU}(\lambda R)=A_{\Sigma}^{NU}(R).$$
We also verify that multiplying a regularizer by a scalar does not change the compliance measure, which is consistent with recovery guarantees.
Consider a cone $\Sigma\subset\mathcal{H}$ and assume that $\Sigma-\Sigma$ is a union of subspaces, $(\Sigma-\Sigma)\cap S(1)$ is compact, and $\Sigma\neq\mathrm{span}(x)$ for each $x\in\Sigma$.
First, $\gamma z\in\mathcal{T}_{R}(F\Sigma)$ if, and only if, there exists $x\in\Sigma$ such that
Let $x\in\Sigma$. We remark that the tangent cone is invariant under scalar multiplication:
D
The above optimization is combinatorial in nature, as there are $\binom{N}{M}$ possible combinations, which are nearly impossible to exhaust in practice except for very small $M$. Therefore, we randomly sample a large number (say $10{,}000$) of combinations and pick the maximizing combination as an approximate solution.
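A minimal sketch of this randomized search (the scoring function is left abstract; all names are illustrative):

```python
import random

def approx_best_subset(score, N, M, n_trials=10_000, seed=0):
    """Randomly sample subsets of size M from N items and keep the one with
    the highest score -- an approximation to the combinatorial search, since
    C(N, M) is far too large to exhaust. `score` maps a subset to a float."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(n_trials):
        subset = tuple(sorted(rng.sample(range(N), M)))
        val = score(subset)
        if val > best_val:
            best, best_val = subset, val
    return best, best_val
```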
To implement template selection per Eq. (6), knowledge of the landmarks is assumed. However, such knowledge does not exist before template selection. Therefore, we propose to use potential key points as substitutes for landmarks. In particular, we use the classical multi-scale detector SIFT to find key points, where landmarks are likely to co-locate.
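One way to obtain such key points with OpenCV's SIFT detector (an illustrative sketch; the paper does not specify its exact implementation):

```python
import cv2

def sift_keypoints(gray_image, n_features=0):
    """Detect multi-scale SIFT key points as stand-ins for landmarks.
    Requires opencv-python; cv2.SIFT_create is available in OpenCV >= 4.4.
    n_features=0 keeps all detected key points."""
    sift = cv2.SIFT_create(nfeatures=n_features)
    keypoints = sift.detect(gray_image, None)
    return [(kp.pt[0], kp.pt[1]) for kp in keypoints]  # (x, y) coordinates
```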
Figure 5: Similarities of potential key points vs. landmarks. The correlation coefficient (CC) between potential key points and landmarks is 0.462; we therefore consider it feasible to replace landmarks with potential key points when estimating similarities.
Q: How good are SIFT key points as substitutes for landmarks? Figure 5 demonstrates the relationship between landmarks and potential key points from handcrafted methods at the feature level (Eq. (9)).
In this paper, we propose a framework named Sample Choosing Policy (SCP) to find the most annotation-worthy images as templates. First, to handle the absence of landmark labels, we choose handcrafted key points as substitutes for the landmarks of interest. Second, to replace the MRE, we propose a similarity score between a template and the rest of the images, based on the features of such potential key points.
A
This may be because task 3 was the only task where registration was performed between two follow-up time points.
The presence of similar deformations and structures in these scans likely rendered the registration between these two time points comparatively easier than the other three tasks.
Following close coordination with the clinical experts of the organizing committee (H.A., M.B., B.W., J.S., E.C., J.R., S.A., M.M.), the time-window between the two paired scans of each patient was decided to be selected such that i) the scans of the two time-points had sufficient apparent tissue deformations, and ii) confounding effects of surgically induced contrast enhancement (Albert et al., 1994; Wen et al., 2010) were avoided.
A
$\theta\in[\bar{\theta},\theta^{\star})$.
If there does not exist a neighborhood of $\theta^{\star}$ in
there exists a neighborhood of $\theta^{\star}$ in which
the $\epsilon$-neighborhood of $\theta^{\star}$ for some
of convergence of $\theta^{\star}$.
A
Control of PDE systems has been widely explored over the years [15, 16, 17, 18]. Similar to ODEs, notions of ISSt for PDE systems have garnered a lot of attention recently (see the survey paper [19]). For example, PDE ISSt has been explored for reaction-diffusion systems [20], hyperbolic systems [21], [22], parabolic systems [23], parabolic PDE systems with boundary disturbances [24], [25], systems with distributed time-delays [26], and the diffusion equation with time-varying distributed coefficients [27]. Notions of practical ISSt for PDEs have been explored in [28]. In contrast to ISSt, ISSf has remained mostly unexplored in the context of PDEs. In [29], safety verification using barrier functionals for homogeneous distributed parameter systems has been considered, with numerical strategies based on semi-definite programming used for the construction of the barrier functionals; however, control performance under disturbances was not considered in that work. Given the importance of maintaining system safety under disturbances, it is critical to consider control design for PDE systems subject to such disturbances. In [30], safe control of the Stefan system under disturbances is considered: an operator is allowed to manipulate the control input as long as safety constraints are satisfied, while a safety controller overrides the operator's control signal in a feedback loop that ultimately guarantees safety. The feedback law for the safety control is designed using backstepping, quadratic programming, and a control barrier function. In our current work, we attempt an alternate approach to achieve safe control of a class of linear parabolic PDEs under disturbances. Specifically, we design a control law that employs feedback from the boundaries and an in-domain point, by utilizing a practical ISSf (pISSf) barrier functional characterization (inspired by the notion presented in [4]). Subsequently, utilizing an ISSt Lyapunov functional characterization, we prove that the designed safety control is also input-to-state stabilizing under certain additional conditions. In this way, we ultimately propose a feedback control law that satisfies the conditions of both ISSt and pISSf.
In this paper, we have explored safe control of a class of linear parabolic PDEs under disturbances. First, we defined unsafe sets and the distance of the system states from such unsafe sets. Next, we constructed both a control barrier functional and a Lyapunov functional in order to develop a design framework for the controller with specific safety and stability guarantees. Additionally, we applied our proposed strategy in the context of a battery management system using boundary coolant control. We present the efficacy of our proposed methodologies through simulation studies under nominal and disturbed conditions. The simulation study shows that the proposed approach can help maintain safety limits. As future work, we plan to extend the framework to (i) $n$-dimensional PDEs, applied to the thermal management of large-scale battery packs, and (ii) PDEs with saturation on input magnitudes and rates.
In the subsequent sections, our approach to finding the control gains is as follows. First, in Section 3, we find the conditions on the control gains that satisfy the pISSf criterion in (9). Next, in Section 4, we show that the pISSf conditions on the control gains additionally guarantee ISSt for the system in the sense of (10).
In this section, we have derived the conditions on the control gains under which the system is pISSf. In the following section, we will show that the derived pISSf conditions also ensure ISSt for the system.
In light of the aforementioned discussion, the main contributions of this paper are the following. Building upon the existing literature, we extend PDE safety research by designing a feedback-based control that satisfies both pISSf and ISSt under disturbances, utilizing a pISSf barrier functional characterization and an ISSt Lyapunov characterization. As a case study, we consider a one-dimensional thermal PDE model for a battery module with boundary coolant control. Next, we construct a control barrier functional and a control Lyapunov functional to obtain analytical guarantees of safety and stability for the battery system. These analytical guarantees allow us to design the controller gains for actuating the boundary coolant. The rest of the paper is organized as follows. Section 2 sets up the problem by discussing the battery module thermal model and formulating the control objectives. Sections 3 and 4 detail the pISSf-ISSt framework. Section 4 presents case studies to illustrate the proposed framework. Finally, Section 5 concludes the paper.
D
In this section, we implement and evaluate a complete testbed system for our spectrum allocation system. We use the testbed to collect training samples, which are then used
Allocation based on SSs parameters is implicitly based on real-time channel conditions, which is important for accurate and optimized spectrum allocation as the conditions affecting signal attenuation (e.g., air, rain, vehicular traffic) may change over time.
The inference time complexity of all our ML approaches is linear in the size of the input; thus, the inference time in practice is minimal (a fraction of a second). The training time complexity of most ML models depends on the training samples and the resulting convergence, and is thus uncertain. The actual training times incurred from our set of
Overall, we implemented a Python repository running on Linux that transmits and receives signals and measures and collects relevant parameters in real-time at
The general spectrum allocation problem is to allocate optimal power to an SU's request across the spatial, frequency, and temporal domains. We focus on the core function-approximation problem, which is to determine the optimal power allocation to an SU for a given location, channel, and time instant, since the frequency and temporal domains are essentially "orthogonal" dimensions of the problem and can thus be handled independently (as done in §III-F). We therefore assume a single channel and time instant for now, and discuss multiple channels and request duration in §III-F.
C
The following result states that, under Assumption 1, if the stepsize at each iteration is chosen by the doubling trick scheme, there is an upper bound for the static regret defined in (4). Moreover, the upper bound is of order $O(\sqrt{T})$ for convex costs.
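The referenced Definition 1 is not reproduced here; a common form of the doubling trick, which this sketch assumes, splits the horizon into periods of length $2^m$ and uses a constant stepsize within each period:

```python
import math

def doubling_trick_stepsizes(T, c=1.0):
    """Doubling-trick schedule (one common form): within period m, which
    lasts 2^m iterations, use the constant stepsize c / sqrt(2^m). Restarting
    this way yields an O(sqrt(T)) static regret bound for convex costs."""
    steps, t, m = [], 0, 0
    while t < T:
        length = 2 ** m
        take = min(length, T - t)
        steps.extend([c / math.sqrt(length)] * take)
        t += take
        m += 1
    return steps
```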
Suppose Assumption 1 holds. Furthermore, if the stepsize is chosen as $\alpha_t=\sqrt{C_T/T}$, then the dynamic regret (5) achieved by Algorithm 1 satisfies
Suppose Assumptions 1(i) and 2 hold. Furthermore, if the stepsize is chosen as $\alpha_t=\frac{P}{\mu t}$, then the static regret (4) achieved by Algorithm 1 satisfies
Suppose Assumption 1 holds. Furthermore, if the stepsize is chosen as $\alpha_t=\sqrt{C_T/T}$, then the dynamic regret achieved by the online gradient descent algorithm (32) satisfies
Suppose Assumption 1 holds. Furthermore, if the stepsize is chosen according to Definition 1, then the static regret (4) achieved by Algorithm 1 satisfies
D
In 2015, Bar et al. (2015) used a pre-trained image classifier for classifying pathologies in chest radiographs, demonstrating the feasibility of detecting X-ray pathology Donahue et al. (2014). In 2017, Cicero et al. (2017) presented a similar CNN classifier that achieved an AUC of 0.964 using a medium-sized dataset of 35,000 X-rays annotated by 2443 radiologists. The authors achieved an overall sensitivity and specificity of 91% using GoogleNet Szegedy et al. (2015). Lakhani and Sundaram (2017) evaluated the performance of CNNs in tuberculosis detection using a small dataset of 1007 chest X-rays. They experimented with pretrained and untrained versions of two architectures, AlexNet Krizhevsky et al. (2012) and GoogleNet Szegedy et al. (2015), and obtained the best performance with an ensemble of both architectures in the pretrained condition (AUC = 0.99); the pretrained models consistently outperformed the untrained models. Similarly, Maduskar et al. (2013) compared the performance of a computer-aided tuberculosis diagnosis system (CAD4TB) with that of health professionals and found that the tuberculosis assessment of CAD4TB was comparable to that of health officers. Wang et al. (2017a) proposed weakly supervised multi-label classification and localization of thoracic diseases using deep learning. In 2017, Rajpurkar et al. (2017) designed a deep learning model called CheXNet, which utilized a 121-layer CNN with dense connections and batch normalization to detect pneumonia. The model was trained on a publicly available dataset of 100,000 chest X-ray images and outperformed the average radiologist performance. Bar et al. (2018) used a model pretrained on a non-medical dataset and fine-tuned it on pathology features for disease identification. Dasanayaka and Dissanayake (2021) presented deep-learning-based segmentation techniques to detect pulmonary tuberculosis. Patel and Kashyap (2023) utilized the Littlewood-Paley Empirical Wavelet Transform (LPEWT) to decompose lung images into sub-bands and extract robust features for lung disease detection. Deep learning has also been extensively applied to the detection of COVID-19 Bhuyan et al. (2022); Farooq and Hafeez (2020); Yang et al. (2020); Li et al. (2020); Pushparaj et al. (2022); Irene D and Beulah (2022); Dhruv et al. (2023).
Limitations: Most disease prediction models focus on single-label classification, where the model detects only the presence of a single pathology. However, multi-label disease classification offers several advantages over single-label classification. Multi-label diagnosis is a more realistic representation: in clinical practice, it is common for patients to have multiple medical conditions. Multi-label classification allows a single instance (e.g., an X-ray image) to be associated with multiple disease labels, providing a more comprehensive view of the patient's health, as many patients suffer from several medical conditions simultaneously. Single-label classification may force a medical professional to decide which disease is the "primary" one when a patient has multiple conditions, which can lead to information loss as secondary conditions are overlooked. Multi-label classification does not require this decision and captures all relevant conditions.
In Table 3 and Table 4, we compare the performance of our proposed model against single- and multi-label prediction models for selected pathologies. Table 3 shows that our proposed multi-label approach outperforms single-label models. In Table 4, the results indicate that our proposed architecture outperforms Wang et al. Wang et al. (2017b) and Irvin et al. Irvin et al. (2019) across multiple pathologies, whereas it surpasses CheXNeXt Rajpurkar et al. (2018), the state-of-the-art chest X-ray disease prediction model, only for the cardiomegaly condition.
Given a medical image of a patient as input, a disease prediction system provides the probability of the occurrence of a disease. This approach represents a single-label classification problem. Examples of such diagnoses include diabetic retinopathy in eye fundus images, skin cancer in skin lesion images, and pneumonia in chest X-rays (Figure 3). However, in certain cases, multi-label prediction becomes crucial, as it provides the probabilities of multiple pathologies occurring within the same medical image. This is particularly important when more than one disease may be present.
Most existing studies on disease diagnosis using chest X-rays primarily focus on detecting a single pathology, such as pneumonia or COVID-19 (Bar et al. (2015); Cicero et al. (2017); Rajpurkar et al. (2017); Dasanayaka and Dissanayake (2021); Hussain et al. (2023)). However, an X-ray image can exhibit multiple pathological conditions simultaneously. Detecting multiple pathologies can provide a comprehensive view of the patient's health from a single image. Single-label classification may produce false negatives when patients have multiple diseases, as it focuses solely on the primary condition. Multi-label classification can help reduce false negatives by identifying secondary or co-occurring diseases. It can also be valuable in epidemiological studies and public health research, providing insights into the prevalence and co-occurrence of diseases in specific populations and aiding resource allocation and healthcare planning. In this research, we employ a 121-layer DenseNet architecture to perform diagnostic predictions for 14 distinct pathological conditions in chest X-rays. Additionally, we utilize the Grad-CAM explanation method to localize specific areas within the chest radiograph, visualizing the regions the model attends to when making disease predictions and enhancing our understanding of the model's outputs. The detection of these 14 pathology conditions, namely 'Atelectasis', 'Cardiomegaly', 'Consolidation', 'Edema', 'Emphysema', 'Effusion', 'Fibrosis', 'Hernia', 'Infiltration', 'Mass', 'Nodule', 'Pneumothorax', 'Pleural Thickening', and 'Pneumonia', presents a multi-label classification problem. The input to the DenseNet architecture is a chest X-ray image; the output provides the probability of each pathology being present in the X-ray. The code for our approach is available on GitHub (https://github.com/dipkamal/chestxrayclassifier).
A
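A minimal sketch of the 14-label DenseNet-121 setup described in the rows above, assuming torchvision's stock DenseNet-121 and a sigmoid-per-label binary cross-entropy loss; the batch shapes and training details are placeholders, not the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision import models

# DenseNet-121 backbone with a 14-way multi-label head: one sigmoid
# output per pathology, trained with binary cross-entropy per label.
model = models.densenet121(weights=None)  # weights left off to keep the sketch offline
model.classifier = nn.Linear(model.classifier.in_features, 14)

criterion = nn.BCEWithLogitsLoss()        # sigmoid + BCE, one term per label
x = torch.randn(4, 3, 224, 224)           # a batch of chest X-rays (placeholder)
y = torch.randint(0, 2, (4, 14)).float()  # multi-hot pathology labels

logits = model(x)
loss = criterion(logits, y)
probs = torch.sigmoid(logits)             # per-pathology probabilities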
A discrete emotion out of a total of 12 (joy, sadness, surprise, contempt, hope, fear, attraction, disgust, tenderness, anger, calm, and tedium) [21].
Physiological signals [17]: BVP, GSR, and SKT physiological signals captured during the experimentation by the BioSignalPlux research toolkit are provided in a binary MATLAB® file (.mat). It contains a cell array with 100 rows (one per volunteer) and 14 columns (one per video). Each cell contains four fields: volunteer identifier, clip or trial identifier, filtering indicator, and an inner cell array (with the physiological data associated with that specific clip and volunteer).
The signals being released are the ones acquired by the BioSignalPlux research toolkit. Specifically, the raw and filtered BVP, GSR, and SKT signals captured during every video visualization are provided. The preprocessing is as follows:
Additionally, two in-house sensory systems are employed. On the one hand, the Bindi bracelet [28] measures dorsal wrist BVP, ventral wrist GSR, and forearm SKT. The hardware and software particularities of this device are detailed in [29, 30, 31]. The previously mentioned BioSignalPlux toolkit is employed as a gold standard to analyze the performance of the bracelet's sensors, given their experimental nature. BVP and GSR signals from BioSignalPlux and Bindi were successfully compared and correlated in [30] and [31]. On the other hand, a GSR sensor to be integrated into the next version of the Bindi bracelet is used. Its hardware and software particularities are detailed in [32].
The BioSignalPlux research toolkit system (https://biosignalsplux.com/products/kits/researcher.html), a device commonly used to acquire different physiological signals in the literature [23, 24, 25, 26]. More specifically, we capture finger Blood Volume Pulse (BVP), ventral wrist Galvanic Skin Response (GSR), forearm Skin Temperature (SKT), trapezius Electromyography (EMG), chest respiration (RESP), and inertial wrist movement through an accelerometer.
D
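A short sketch of how the released .mat cell array described above could be read; the variable name, field order, and file name are assumptions for illustration only, since the excerpt does not specify them.

from scipy.io import loadmat

# Load the released recordings: a 100 x 14 cell array (volunteer x video),
# each cell holding volunteer id, clip id, a filtering indicator, and the
# BVP/GSR/SKT data. 'signals' and the field order are assumed names.
mat = loadmat("physiological_signals.mat", squeeze_me=False)
signals = mat["signals"]      # assumed variable name inside the file
cell = signals[0, 0]          # volunteer 0, video 0
volunteer_id, clip_id, filtered, data = cell
print(volunteer_id, clip_id, filtered)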
We have made available an online system with this trained network so that anyone can use and test it simply by uploading images. The software automatically labels the images as positive or negative for AMD. We have also made the source code of the entire software publicly available so that researchers can use it as is or improve it. We are focused on fostering partnerships to facilitate and conduct research on using deep learning to generate and recognize medical images.
Figure 5 provides examples of real and synthetic images of eyes positive and negative for AMD. One can observe the high quality of the generated images for both AMD and non-AMD cases.
We have made the source code for generating the synthetic images publicly available to facilitate joint research in the field. We have also provided, through this paper, free access to the online use of the AMD detection model. This will facilitate future work on broadening the scope to detecting the severity of AMD and differentiating it from other diseases. For generating synthetic medical images, a broader range of deep architectures should be considered, along with how effectively heatmaps help clinicians.
Evaluating the quality of synthetic images is important for establishing their usability in practical applications, such as training deep learning models, since it can significantly influence the training of these models. If the synthetic data does not accurately represent reality or lacks diversity, it may introduce noise into the training and degrade model performance.
The potential of diffusion models [63], known for their advanced capabilities in generating high-quality and diverse images, presents an exciting direction for future research in AMD and other ophthalmology diagnoses. These models should be considered in future development.
D
Our approach parallels the development in [17], where we addressed the approximation of model predictive control policies for deterministic systems. We ask whether the training of a ReLU-based neural network to approximate a controller $\Phi(\cdot)$ has been sufficient to ensure that the network's output function $\Phi_{\mathrm{NN}}(\cdot)$ will be stabilizing for (1). Our approach is based on the offline characterization of the error function $e(x)\coloneqq\Phi_{\mathrm{NN}}(x)-\Phi(x)$ using mixed-integer (MI) optimization, where $\Phi(\cdot)$ is a continuous PWA law defined using any of (3), (4) or (5) (as we show in §4).
Our approach parallels the development in [17], where we addressed the approximation of model predictive control policies for deterministic systems. We ask whether the training of a ReLU-based neural network to approximate a controller $\Phi(\cdot)$ has been sufficient to ensure that the network's output function $\Phi_{\mathrm{NN}}(\cdot)$ will be stabilizing for (1). Our approach is based on the offline characterization of the error function $e(x)\coloneqq\Phi_{\mathrm{NN}}(x)-\Phi(x)$ using mixed-integer (MI) optimization, where $\Phi(\cdot)$ is a continuous PWA law defined using any of (3), (4) or (5) (as we show in §4).
The first quantity is precisely of the type required to apply the stability result of §3.2, thus supplying a condition on the optimal value of an MILP sufficient to certify the uniform ultimate boundedness of the closed-loop system (1) under the action of $\Phi_{\mathrm{NN}}(\cdot)$, obtained by suitably training a ReLU network to replicate $\Phi(\cdot)$.
By analyzing the results in Tab. 3 – specifically, by contrasting the third and fourth columns – we notice that we always succeeded in designing a minimum-complexity, stabilizing ReLU-based surrogate $\Phi_{\mathrm{NN}}(\cdot)$ of $\Phi(\cdot)$ in (10) for all the considered cases, i.e., Ex. (a)–(j). In particular, the resulting values for $\bar{e}_{\infty}$ suggest that the neighbourhood of the origin we are assured to reach with $\Phi_{\mathrm{NN}}(\cdot)$ can be made very small in practice, certifying that the system state is ultimately bounded in a set up to 99.02% smaller than the original volume of the control invariant set $\mathcal{S}$ (values for $b$, last column). Note that the obtained results can, in principle, be further improved by adding extra layers or neurons in the architecture underlying $\Phi_{\mathrm{NN}}(\cdot)$ – this may come at the price of slightly increasing both the training time and the time required for computing $\bar{e}_{\infty}$.
We will obtain a condition on the optimal value of an MILP sufficient to assure that the closed-loop system (1) under the action of $\Phi_{\mathrm{NN}}(\cdot)$ is (uniformly) ultimately bounded within a set of adjustable size and (exponential) convergence rate, according to the following notion:
D
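The quantity characterized offline in these rows is the worst-case error $e(x)=\Phi_{\mathrm{NN}}(x)-\Phi(x)$. The sketch below only illustrates that quantity via dense sampling in place of the exact MI program; the sampling approach, the placeholder laws, and the interval taken for $\mathcal{S}$ are all assumptions, and sampling can only under-approximate the true worst case that the MILP certifies.

import numpy as np

def phi(x):
    # The PWA control law (placeholder: a saturated linear law).
    return np.clip(-0.5 * x, -1.0, 1.0)

def phi_nn(x):
    # A trained ReLU surrogate (placeholder weights, one hidden unit).
    h = np.maximum(0.0, 1.2 * x + 0.1)
    return np.clip(-0.4 * x - 0.1 * h, -1.0, 1.0)

# Sampled estimate of max_x |e(x)| over an assumed invariant set S = [-5, 5].
xs = np.linspace(-5.0, 5.0, 100001)
e_inf = np.max(np.abs(phi_nn(xs) - phi(xs)))
print(f"sampled lower bound on the worst-case error: {e_inf:.4f}")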
Specifically, the $E$-verifier can be used to obtain, with polynomial complexity, one necessary and one sufficient condition for $C$-enforceability; in case the sufficient condition is satisfied, the trimmed version of the $E$-verifier leads to a strategy to enforce concealability, also with polynomial complexity.
These developments should be contrasted against constructions with exponential complexity [12] (the latter, however, provide a necessary and sufficient condition).
Specifically, the $E$-verifier can be used to obtain, with polynomial complexity, one necessary and one sufficient condition for $C$-enforceability; in case the sufficient condition is satisfied, the trimmed version of the $E$-verifier leads to a strategy to enforce concealability, also with polynomial complexity.
It is worth mentioning that the focus of this paper is on the use of reduced-complexity constructions (with polynomial complexity) to provide one necessary condition and one sufficient condition for $C$-enforceability.
Taking advantage of the special structure of the concealability problem, we propose a verifier-like structure of polynomial complexity to obtain one necessary condition and one sufficient condition for enforceability of the defensive function.
A
In this section we review typical loss functions used in image registration, and analyze the related requirements for privacy-preserving optimization.
Since the registration gradient is generally driven mainly by a fraction of the image content, such as the image boundaries in the case of SSD cost, a reasonable approximation of Equations (4) and (6) can be obtained by evaluating the cost only on relevant image locations.
The loss f𝑓fitalic_f can be any similarity measure, e.g., the Sum of Squared Differences (SSD), the negative Mutual Information (MI), or normalized cross correlation (CC).
A typical loss function to be optimized during the registration process is the sum of squared intensity differences (SSD) evaluated on the set of image coordinates:
Thanks to the privacy and security guarantees of these cryptographic tools, during the entire registration procedure the content of the image data $S$ and $J$ is never disclosed to the opposite party.
C
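Written out, the SSD cost named in these rows takes the standard form below; the coordinate set $\Omega$ and the spatial transform $T$ are notational assumptions, matching the usual convention of a fixed image $S$ and a moving image $J$:

f_{\mathrm{SSD}}(T)=\sum_{x\in\Omega}\bigl(S(x)-J(T(x))\bigr)^{2}.

Its gradient is dominated by high-contrast locations such as image boundaries, which is why evaluating the cost only on relevant image locations, as noted above, approximates the full sum well.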
$\hat{b}_{1}^{t}(\tau_{1}^{H})\leftarrow\hat{\mathbb{P}}^{t}(\tau_{1}^{H})$.
12: Update the confidence set $\mathcal{C}^{t}$ by (4.4).
To conduct optimistic planning, we seek the policy that maximizes the return over all parameters $\theta\in\mathcal{C}^{t}$ and the corresponding features. The policy update takes the following form,
$\pi^{t}\leftarrow\operatorname{argmax}_{\pi\in\Pi}\max_{\theta\in\mathcal{C}^{t}}V^{\pi}(\theta)$,
$\pi^{t}\leftarrow\operatorname{argmax}_{\pi\in\Pi}\max_{\theta\in\mathcal{C}^{t}}V^{\pi}(\theta)$.
A
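The optimistic planning update in these rows picks the policy maximizing value over all parameters in the confidence set. A brute-force sketch over finite candidate sets follows; the finiteness of $\Pi$ and $\mathcal{C}^{t}$ and the toy value function are assumptions made purely for illustration.

import numpy as np

# pi^t <- argmax_{pi in Pi} max_{theta in C^t} V^pi(theta),
# enumerated over finite policy and parameter sets.
def value(pi, theta):
    # Placeholder value function V^pi(theta).
    return -(pi - theta) ** 2 + theta

policies = np.linspace(0.0, 1.0, 11)   # Pi, discretized
confidence_set = [0.2, 0.4, 0.6]       # C^t from the confidence-set update

best_pi = max(policies,
              key=lambda pi: max(value(pi, th) for th in confidence_set))
print(f"optimistic policy: {best_pi:.1f}")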
$\mathcal{O}_{Y_{1},B^{\prime}}=0$.
to be a variant that returns the set of columns $Y_{1}$ and the set of
$Y_{1}$, with the notations of the above lemma.
$R_{[\mathrm{K}]}$, with the notations of lem. 61.
notations and hypotheses as in lemma 53, with $A:=A_{\Sigma}$,
B
$\lambda_{\min}(b(k)\hat{\mathcal{L}}_{\mathcal{G}}+a(k)\mathcal{H}^{T}\mathcal{H})=\frac{1}{k+1}\lambda_{\min}(\hat{\mathcal{L}}_{\mathcal{G}}+\mathcal{H}^{T}\mathcal{H})=\frac{1}{2k+2}$. Then, condition (i) holds with $h=1$ and $\sum_{k=0}^{\infty}\Lambda_{k}^{h}=\sum_{k=0}^{\infty}\frac{1}{2k+2}=\infty$.
For the special case without regularization, we directly obtain the following corollary by Theorem 1.
The convergence and performance analysis of the algorithm (6) is presented in this section. First, Lemma 1 gives a nonnegative supermartingale-type inequality for the squared estimation error, based on which Theorem 1 proves the almost sure convergence of the algorithm. Then, Theorem 2 gives intuitive convergence conditions for the case with balanced conditional digraphs via Lemma 2. Thereafter, Corollary 2 gives more intuitive convergence conditions for the case with Markovian switching graphs and regression matrices. Finally, Theorem 3 establishes an upper bound for the regret of the algorithm via Lemma 3, and Theorem 4 gives a non-asymptotic rate for the algorithm. The proofs of the theorems, Proposition 1, and Corollary 2 are in Appendix A, and those of the lemmas in this section are in Appendix B.
Then, we give intuitive convergence conditions for the case with balanced conditional digraphs. We first introduce the following definitions.
Thereafter, we give more intuitive convergence conditions for the case with Markovian switching graphs and regression matrices. We first make the following assumption.
A
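Reading the displayed identity with the (assumed) choices $a(k)=b(k)=\frac{1}{k+1}$ and $\lambda_{\min}(\hat{\mathcal{L}}_{\mathcal{G}}+\mathcal{H}^{T}\mathcal{H})=\frac{1}{2}$, the arithmetic and the divergence claimed for condition (i) reduce to the harmonic series:

\lambda_{\min}\bigl(b(k)\hat{\mathcal{L}}_{\mathcal{G}}+a(k)\mathcal{H}^{T}\mathcal{H}\bigr)
=\frac{1}{k+1}\,\lambda_{\min}\bigl(\hat{\mathcal{L}}_{\mathcal{G}}+\mathcal{H}^{T}\mathcal{H}\bigr)
=\frac{1}{2(k+1)}=\frac{1}{2k+2},
\qquad
\sum_{k=0}^{\infty}\frac{1}{2k+2}=\frac{1}{2}\sum_{m=1}^{\infty}\frac{1}{m}=\infty.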
Graph signal variations can also be computed in the $\ell_{1}$-norm as the graph total variation (GTV) [10, 11].
Graph signal variations can also be computed in the $\ell_{1}$-norm as the graph total variation (GTV) [10, 11].
Though convex, minimization of an $\ell_{1}$-norm objective like GTV requires iterative algorithms like proximal gradient (PG) [24] that are often computationally expensive.
Its generalization, total generalized variation (TGV) [17, 18], better handles the known staircase effect, but retains the non-differentiable $\ell_{1}$-norm that requires iterative optimization.
Total variation (TV) [16] was a popular image prior due to available algorithms for minimizing the convex but non-differentiable $\ell_{1}$-norm.
B
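Minimizing an $\ell_{1}$ prior such as GTV with proximal gradient reduces to alternating a gradient step with soft-thresholding. A minimal denoising-flavoured sketch follows; the identity sparsifying operator and the placeholder signal are assumptions (a graph total variation prior would apply the same shrinkage to edge differences instead).

import numpy as np

def soft_threshold(v, lam):
    # Prox of lam * ||.||_1: shrink each entry toward zero by lam.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Proximal gradient (ISTA) for min_x 0.5 * ||x - y||^2 + lam * ||x||_1.
y = np.array([3.0, -0.2, 0.5, -4.0])
x, lam, step = np.zeros_like(y), 0.3, 1.0
for _ in range(50):
    grad = x - y                                   # gradient of the smooth term
    x = soft_threshold(x - step * grad, step * lam)  # prox step on the l1 term
print(x)  # entries of y shrunk toward zero by lam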
In summary, our simulation study showed that DL-based methods can be used for MR image re-parameterization. Based on our preliminary results, we suggest that DL-based methods hold the potential to generate, via simulation, MR imaging scans with a new set of parameters.
In summary, our simulation study showed that DL-based methods can be used for MR image re-parameterization. Based on our preliminary results, we suggest that DL-based methods hold the potential to generate, via simulation, MR imaging scans with a new set of parameters.
Future work can focus on varying a larger number of acquisition parameters. This approach could also be utilized for T1/T2 mapping, given the availability of sufficient training data.
BrainWeb is a simulated brain database that contains a set of realistic MRI data volumes produced by an MRI simulator. We used this tool to generate test scans in 5 different parameter settings. The results for both models can be seen in Figure 6, and the evaluation metrics on this test set are given in Table 2.
In our work, we propose a coarse-to-fine fully convolutional network for MR image re-parameterization, mainly for the Repetition Time (TR) and Echo Time (TE) parameters. As the model is coarse-to-fine, we use image features extracted from an image reconstruction auto-encoder as input instead of directly using the raw image; this makes the proposed model more robust to potential overfitting. Based on our preliminary experiments, DL-based methods hold the potential to simulate MRI scans with a new set of parameters. Our deep learning model also performs the task considerably faster than simple biophysical models. To generate our data, we rely on MRiLab [7], a conventional MR image simulator. Source code is publicly available at https://github.com/Abhijeet8901/Deep-Learning-Based-MR-Image-Re-parameterization.
B
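The physics that makes TR/TE re-parameterization learnable is compact. Under the standard spin-echo signal model (a textbook approximation, not this paper's simulator), the signal is a closed-form function of the acquisition parameters; the tissue values below are rough 1.5 T figures used only for illustration.

import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Standard spin-echo model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).

    pd, t1, t2 : proton density and relaxation times of the tissue (ms)
    tr, te     : the acquisition parameters being re-parameterized (ms)
    """
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Approximate white-matter values rendered at T1- and T2-weighted settings:
s_t1w = spin_echo_signal(pd=0.7, t1=600.0, t2=80.0, tr=500.0, te=15.0)
s_t2w = spin_echo_signal(pd=0.7, t1=600.0, t2=80.0, tr=4000.0, te=100.0)
print(s_t1w, s_t2w)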
1) To the best of our knowledge, this design represents the first real-time photon counting receiver implementation on a conventional SiPM and an FPGA, enhancing its potential for IoT applications compared to previous offline approaches [10],[11], [12], [26], [27].
In this paper, we have demonstrated a novel real-time SiPM-based receiver with a low bit rate and high sensitivity, which has the potential for low transmitter power consumption. The work evaluates the analog chain of the receiver to show the potential for lower power consumption. Numerical simulation shows that the required power consumption of the amplifier is approximately 50 mW at 120 MHz GBP. In addition, to further reduce the complexity and power consumption of the digital circuit design, the FPGA implements an asynchronous photon detection method. Finally, the implementation of interleaved counters in the receiver allows it to receive streaming data without dead time. To the best of our knowledge, this is the first such design implemented on an FPGA and a conventional SiPM, making it more suitable for utilizing SiPMs in IoT applications than previous offline approaches.
To optimize the real-world performance of the real-time SiPM-based receiver for IoT applications, the power consumption of its components was measured. Table II presents the power consumption measurements for the prototyped receiver at data rates of up to 1 Mbps. The SiPM's power consumption increases with the data rate: the current within the SiPM originates from electrons excited by the detected photons, maintaining a proportionate relationship with the incident light, and achieving a higher data rate depends on detecting more photons. Meanwhile, the measured power consumption of the evaluation board was considerably higher than that of the designed circuit due to numerous unused peripheral interfaces, the advanced RISC machine (ARM) core, and FPGA resources active during the board's power-up process. To evaluate the power consumption of the designed receiver circuit, separate measurements were taken for the Xilinx ZYNQ 7000 FPGA, first with only the transmitter PRBS generator and then with both the transmitter and receiver implemented; the difference between these values gives an estimate of the power consumption of the receiver's digital circuit, which is 36 mW. Among the receiver components, the three amplifiers consume the most power, approximately 2 W. Therefore, analyzing the power consumption of the amplifiers is the focus of Sections V and VI.
2) By conducting numerical simulations, this study assessed the GBP of the post-readout circuit within the SiPM-based optical receiver. This assessment complements previous research findings and offers insights into the circuit’s suitability for future low-power consumption applications.
The previous section designed the receiver based on an ideal setup to investigate the SiPM performance. However, practical receivers often contain amplifier blocks and low-pass or band-pass hardware filters, which affect the shape of the SiPM output pulses fed to the FPGA. To ensure the best transmission performance of the SiPM pulses, three high-GBP amplifier blocks were used in the real-time experiments. However, these high-performance amplifiers also increase the receiver's power consumption, a disadvantage especially in IoT applications. When selecting an amplifier, factors such as bandwidth, slew rate, and power consumption should be considered. For a single-pole-response voltage feedback amplifier, the product of the DC gain and the bandwidth is constant, and this product trades off against power consumption [38]. In order to minimize the power consumption of the receiver, the effect of the receiver's GBP on the BER was investigated. Since changing the GBP of each amplifier is not practical due to experimental limitations, the rest of the investigation uses numerical simulation based on the offline processing method of Section II. The sample waveforms captured from the oscilloscope were filtered through a first-order Butterworth low-pass filter (LPF) implemented in software with a bandwidth below 1 GHz.
C
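The software filtering step described above is straightforward to reproduce: a first-order Butterworth low-pass filter applied to a sampled pulse train. The sampling rate, cutoff, and toy waveform below are placeholders, not the experiment's actual values.

import numpy as np
from scipy.signal import butter, lfilter

# First-order Butterworth LPF, as used to band-limit the captured
# SiPM waveforms before offline BER evaluation.
fs = 5e9          # sample rate of the captured waveform (placeholder)
cutoff = 500e6    # sub-GHz bandwidth, consistent with the text
b, a = butter(N=1, Wn=cutoff, btype="low", fs=fs)

t = np.arange(0, 1e-6, 1.0 / fs)
pulses = (np.sin(2 * np.pi * 1e6 * t) > 0.99).astype(float)  # toy pulse train
filtered = lfilter(b, a, pulses)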
Suppose we extrapolate the ≈0.05 m/s spent by the spacecraft in the Hohmann-like transfer plus orbital maintenance in the 800 m orbit (tighter than the tightest 1 km orbit of OSIRIS-REx [54]). In that case, the spacecraft could still orbit Bennu, and make similar orbital transfers, for about 227 days before reaching the 9 m/s best scenarios of Takahashi & Scheeres [46]. The point here is not to advise the use of this paper's exact architecture and mission profile. Instead, it shows that a fully autonomous operation opens new possibilities for asteroid exploration. It is a paradigm shift from the current conservative approach of severely constraining uncertainties before close-proximity operations.
It is also crucial to emphasize that the comparison of these magnitudes with the OSIRIS-REx mission and other missions hereafter serves only to provide a notion of the order of magnitude of the $\Delta V$ budget in real mission cases. The intention is only to showcase that the architecture proposed in this study aligns well with the values expected for a similar kind of mission within the current paradigm. Of course, real missions have many more requirements, including very strict scientific requirements, that may impose a high burden in terms of the $\Delta V$ budget.
In addition to these benefits, and more importantly, an autonomous and rapid approach to exploration can make current scientific asteroid missions more cost-effective and time-efficient. Current missions have a conservative and cautious operational profile, often taking months of surveying and slowly approaching the target to constrain the uncertainties to very low levels before the primary goal of the mission [48, 54]. For instance, the OSIRIS-REx mission took about four months to approach and make a preliminary survey of the asteroid Bennu before being inserted into its first orbit. The preliminary survey lasted approximately 20 days, during which the spacecraft made multiple flybys at a distance of roughly 7 km to reduce the uncertainty in the asteroid's mass to 2% before a safe insertion into orbit [54].
We would like to emphasize that our intention is not to advocate a universal approach of "rapid exploration" in all asteroid missions. Instead, our objective is to illustrate that it is unnecessary to reduce uncertainties to an excessively low level for autonomous robotic spacecraft. We aim to demonstrate that autonomous robotic spacecraft can effectively handle uncertainties, thus reducing the time spent solely on uncertainty reduction for navigation purposes. We fully recognize the significance of prolonged periods dedicated to sensor and hardware testing, calibration, detecting contingencies, extensive imaging from various phase angles, and other critical activities.
Well-designed guidance and control laws can allow an autonomous spacecraft to operate more boldly, even with a higher level of uncertainty in the navigation. On top of that, there is no significant compromise in the $\Delta V$ budget, as one might expect. Therefore, a fully autonomous mission in close proximity might not need a long 20-day preliminary survey phase like the OSIRIS-REx mission, and its 94-day approach phase could potentially be shortened [54]. It is important to note that a real mission involves various additional requirements, beyond reducing uncertainties to a very low level, that affect the time spent during the preliminary survey and approach. However, from a GN&C perspective, our study indicates that there is no reason autonomous spacecraft studies should adopt these same approach times to reduce uncertainties to a very low level.
D
Consider a multirotor UAV with an antenna on its top surface (i.e., the surface facing the sky) that is communicating with a ground node, and assume the UAV moves away from the node. To do this, the multirotor UAV has to tilt in such a way that its bottom surface (i.e., the surface facing the ground) is slightly oriented towards the ground node, see Fig. 3. This can fully or partially block the LoS between the antennas of the ground node and of the UAV. In the case of fixed-wing UAVs, airframe shadowing can occur when the UAVs turn. In turning, they usually change their roll by controlling their ailerons; during this manoeuvre, one wing tilts up and the other tilts down, which might temporarily block the LoS with other communication nodes. The severity of airframe shadowing, for both types of UAVs, depends on the airframe or wing material, its size, its shape, the antenna location on the UAV's frame, and the UAV trajectories. This phenomenon has been observed in practice, but, as mentioned in [95, 96], it has not yet been fully studied.
iii. Mathematical model available: in this case, we only have access to a mathematical model of the communications channel. In our previous work [4], we considered the problem of a multirotor UAV that must reach some goal while transmitting data to a BS. The only information about the communications channel used for solving the communications-aware trajectory planning was the pathloss model and the p.d.f. of the shadowing. In [119], the authors considered the problem of optimizing the position of a UAV operating as a BS; to solve this, they used the pathloss model complemented by the Probability Mass Function (p.m.f.) of the LoS. In [120], we considered the problem of mitigating the small-scale fading in an MR communications link by leveraging the knowledge of its p.d.f. and spatial correlation. ∎
where $T_{\mathrm{hover}}^{m}$ is the time that the UAV hovers over the $m$th HL, which depends on the number of HLs, the channel capacity, and the probability of receiving a successful transmission, see [162] for the details; $T_{\mathrm{travel}}$ is the time that the UAV spends in motion, which depends mainly on $\mathbf{L}$ and on $\mathbf{Z}$. In summary, this CaTP problem takes the following form:
Consider a multirotor UAV with an antenna on its top surface (i.e., the surface facing the sky) that is communicating with a ground node, and assume the UAV moves away from the node. To do this, the multirotor UAV has to tilt in such a way that its bottom surface (i.e., the surface facing the ground) is slightly oriented towards the ground node, see Fig. 3. This can fully or partially block the LoS between the antennas of the ground node and of the UAV. In the case of fixed-wing UAVs, airframe shadowing can occur when the UAVs turn. In turning, they usually change their roll by controlling their ailerons; during this manoeuvre, one wing tilts up and the other tilts down, which might temporarily block the LoS with other communication nodes. The severity of airframe shadowing, for both types of UAVs, depends on the airframe or wing material, its size, its shape, the antenna location on the UAV's frame, and the UAV trajectories. This phenomenon has been observed in practice, but, as mentioned in [95, 96], it has not yet been fully studied.
The communications channel gain depends on the relative orientation of the transmitting and receiving antennas. During the flying phase, a multirotor UAV must tilt, thus changing its antenna orientation. As a consequence, the communication channel observed when a multirotor UAV hovers is different from the one observed when it moves [99], see Fig. 4. Furthermore, the antenna's contribution to the channel gain varies with the motion of the UAV, see [98] for more details. Similarly, during turning manoeuvres, a fixed-wing UAV has to tilt, thus changing its antenna orientation, see Fig. 5. The communications channel observed when fixed-wing UAVs move in a straight line is different from the one observed when they are turning. We also note that the location and orientation of the antenna on the UAV have a significant impact on the communications channel, as shown experimentally in [100, 101, 102, 103].
D
In the case where $\Sigma_{2}$ is static or stability of $x_{2}^{*}=0$ is of no concern, the dissipativity conditions (i)-(iv) in Theorem 20 for $\Sigma_{2}$ can be simplified by omitting $x_{2}$ as in (6) and restricting $\mathcal{X}$ to be $\mathcal{X}_{1}$ in Assumption 12 or 14 and Theorem 20. In this case, stability of $x_{1}^{*}=0$ may be established with $S(x_{1},z)=S_{1}(x_{1},z_{1})+S_{2}(z_{2})$ by looking at the closed-loop map from $w_{1}$ to $y_{1}$.
Interestingly, asymptotic stability of the feedback system may be established using a type of strict dissipativity where the strictness is derived from
Feedback stability in the sense of Lyapunov often leaves much to be desired. Next, we examine the stronger notion of asymptotic feedback stability via dissipativity.
IQCs, whereas the dynamics of the auxiliary system facilitate the verification of the dissipativity of the system with respect to the supply rate in
of dissipativity so that the stronger notion of asymptotic stability of $\Sigma_{1}\|\Sigma_{2}$ can be established. It is worth noting that
A
For a stochastic system, it is generally hard for a subset of the state space to be (almost surely) invariant, because the diffusion coefficient is required to be zero at the boundary of the subset (the details are discussed in [18], which aims to make the state of a stochastic system converge to the origin with probability one and confine the state to a specific subset with probability one; this aim is somewhat like that of a control barrier function. Tamba et al. make a similar argument for CBFs in [19], but their sufficient condition is more stringent). To avoid this tight condition on the coefficient, we should design a state-feedback law whose value is large, in general diverging, at the boundary of the subset, so that the effect of the law overcomes the disturbance term. Moreover, a functional ensuring the (almost sure) invariance of the subset probably diverges at the boundary of the set, as with a global stochastic Lyapunov function [22, 23, 24] and an RCBF.
On the other hand, the CBF approach is closely related to the control Lyapunov function (CLF), which immediately provides a stabilizing control law from the CLF, as in Sontag [16] for deterministic systems and Florchinger [17] for stochastic systems. Therefore, in the CBF approach, deriving a safety-critical control law immediately from the CBF is also important. For this discussion, the problem setting in which the safe set is coupled with the CBF is appropriate, as in Ames et al. [2]. The stochastic version of Ames et al.'s result was recently discussed by Clark [12]; he asserts that his RCBF and ZCBF guarantee the safety of a set with probability one. At the same time, Wang et al. [13] analyze the probability of the time when the sample path leaves a safe set under conditions similar to Clark's ZCBF. Wang et al. also claim that a state-feedback law achieving safety with probability one often diverges toward the boundary of the safe set; this inference also follows from the fact that the conditions for the existence of an invariant set in a stochastic system are strict and influenced by the properties of the diffusion coefficients [18]. This argument is in line with stochastic viability by Aubin and Da Prato [20]. For CBFs, Tamba et al. [19] provide sufficient conditions for safety with probability one, which require difficult conditions on the diffusion coefficients. Therefore, we need to reconsider a sufficient condition for safety with probability one, and we also need to rethink the problem setup to compute the safety probability obtained by a bounded control law.
The above discussion also implies that if a ZCBF is defined for a stochastic system and ensures "safety with probability one," the good robustness property of the ZCBF probably does not materialize, because the related state-feedback law generally diverges at the boundary of the safe set. Hence, the previous work in [13] proposes a ZCBF with an analysis of the exit time of
In Section 4, first, we propose an AS-RCBF and an AS-ZCBF ensuring the invariance of a safe set with probability one. Second, we design a safety-critical controller ensuring the existence of an AS-RCBF and an AS-ZCBF and show that the controller diverges towards the boundary of the safe set. Third, we construct a new type of stochastic ZCBF that clarifies the probability of the invariance of a safe set and shows the convergence of a specific expectation related to the attractiveness of the safe set from outside the set.
For a stochastic system, it is generally hard for a subset of the state space to be (almost surely) invariant, because the diffusion coefficient is required to be zero at the boundary of the subset (the details are discussed in [18], which aims to make the state of a stochastic system converge to the origin with probability one and confine the state to a specific subset with probability one; this aim is somewhat like that of a control barrier function. Tamba et al. make a similar argument for CBFs in [19], but their sufficient condition is more stringent). To avoid this tight condition on the coefficient, we should design a state-feedback law whose value is large, in general diverging, at the boundary of the subset, so that the effect of the law overcomes the disturbance term. Moreover, a functional ensuring the (almost sure) invariance of the subset probably diverges at the boundary of the set, as with a global stochastic Lyapunov function [22, 23, 24] and an RCBF.
B
$\bar{\sigma}({\bf Z}_{\rm droop}(s))\approx\bar{\sigma}({\bf Z}_{\rm GFM}(s))\approx\bar{\sigma}({\bf Z}_{\rm PI}(s))\approx-40\ {\rm dB}=0.01$ at 10 Hz in Fig. 3. When considering VSMs with reactive power droop control, virtual impedance, and damping enhancement, the reactance is 0.04 pu, 0.03 pu, and 0.02 pu, respectively, since $\bar{\sigma}({\bf Z}_{\rm GFM-QD}(s))\approx-26\ {\rm dB}\approx 0.04$, $\bar{\sigma}({\bf Z}_{\rm GFM-VI}(s))\approx-30\ {\rm dB}\approx 0.03$, and $\bar{\sigma}({\bf Z}_{\rm GFM-damp}(s))\approx-32\ {\rm dB}\approx 0.02$ at 10 Hz in Fig. 3.
In this paper, to ensure the generality of the proposed approach, we consider GFM converters with different implementations, such as droop control, power synchronization control, and VSMs (w/wo reactive power droop control [23], virtual impedance [24], and damping enhancement [25, 26]). We focus on the voltage source behavior of GFM converters which helps improve the system’s small signal stability dominated by GFL converters.
Rather than changing the power network, we use GFM converters under power synchronization control or VSMs (w/wo reactive power droop control), respectively, to improve the power grid strength and stabilize the system according to Proposition IV.1. Fig. 8, Fig. 9, and Fig. 10 show the responses of the system with different capacity ratios $\gamma$ under different GFM methods, respectively. There is a voltage disturbance from the infinite bus at t = 0.2 s (a voltage sag of 5% that lasts 10 ms). It can be seen that the damping ratio of the system is improved when a larger $\gamma$ is adopted (i.e., with more GFM converters), and the system has satisfactory performance with $\gamma=17.8\%$ or $\gamma=21.4\%$ (aligned with Example 1). Furthermore, it can be confirmed that the $\gamma$ under VSMs with reactive power droop control needs to be larger to achieve similar damping performance, compared with GFM converters under power synchronization control and VSMs without reactive power droop control.
We consider the scenario where the system is unstable with $\mathrm{gSCR}=\mathrm{gSCR}_{0}=1.1$ (i.e., $\gamma=0$) at the 35 kV bus. The other settings for the power grid in Fig. 7 are the same as those described above. Fig. 12 shows the responses of the system with different capacity ratios $\gamma$. It can be seen that the damping ratio of the system is improved when a larger $\gamma$ is adopted (i.e., with more GFM converters), and the system has satisfactory performance with $\gamma=4.8\%$ (aligned with Example 2). To validate our analysis in Section II, Fig. 13 displays the responses of the system (active and reactive power of wind farm 1) under a voltage disturbance (a voltage sag of 5% at the infinite bus that lasts 1 ms), in which we change VSMs without reactive power droop control to GFL converters with constant AC voltage control ($\gamma=4.8\%$). It can be seen that the system becomes unstable if, instead of installing VSMs without reactive power droop control, one chooses to install GFL converters with constant AC voltage control. The reason is that, even with constant AC voltage control, GFL converters can only exhibit 1D-VS behavior due to their control structure and thus cannot enhance the power grid strength, as discussed in Section II. By comparison, VSMs without reactive power droop control have 2D-VS behavior and can effectively enhance the power grid strength.
In this paper, to test the generality and effectiveness of the proposed approach when considering GFM converters under different implementations, we will consider power synchronization control and VSMs w/wo reactive power droop control in the analysis and simulation studies to quantify how they improve the small-signal stability of the system, where VSMs without reactive power droop control belong to the category of VSMs without additional control methods, as mentioned above.
D
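The dB figures quoted above convert to per-unit magnitudes via $|Z|=10^{\mathrm{dB}/20}$; a one-line check for the exact case stated in the text:

# Per-unit magnitude from a level quoted in dB: |Z| = 10 ** (dB / 20).
print(10 ** (-40 / 20))  # -40 dB -> 0.01, matching the droop/GFM/PI case above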
Table 7: Quantitative comparison (average PSNR/SSIM) with state-of-the-art approaches for tiny/light image SR on benchmark datasets (×4). The best and second-best performances are highlighted and underlined, respectively.
In Fig. 7, we also exhibit the visual results of several tiny/lightweight models on Urban100 (×4). For img_078, the tiny and light models are tested with the patches framed by the green and red boxes, respectively. Generally, MANs restore textures better and more clearly than other methods.
To validate the effectiveness of our MAN, we compare our normal model to several SOTA classical ConvNets [58, 8, 59, 41, 40, 37]. We also add SwinIR [30] for reference. In Tab. 6, the quantitative results show that our MAN exceeds other convolutional methods by a large margin. The maximum improvement in PSNR reaches 0.69 dB for ×2, 0.77 dB for ×3, and 0.81 dB for ×4. Moreover, we compare our MAN with SwinIR. For ×2, our MAN achieves competitive or even better performance than SwinIR; the PSNR value on Manga109 is boosted from 39.92 dB to 40.02 dB. For ×4, MAN is slightly behind SwinIR because the latter uses the ×2 model as its pre-trained model. More importantly, MAN is significantly smaller than existing methods.
Overall study on components of MAN. In Tab. 2, we present the results of deploying the proposed components on our tiny and light networks. In general, the best performances are achieved by employing all proposed modules. Specifically, improvements of 0.25 dB and 0.29 dB on Urban100 [18] can be observed for MAN-tiny and MAN-light, while the parameters and calculations increase negligibly. Among these components, the LKAT module and the multi-scale mechanism are the most important for enhancing quality; without either of them, the PSNR drops by 0.09 dB. The GSAU is an economical replacement for the MLP: it reduces the parameters by 15K and the calculations by 3.6G while bringing significant improvements across all datasets.
To verify the efficiency and scalability of our MAN, we compare MAN-tiny and MAN-light to some state-of-the-art tiny [12, 26, 56, 44, 27] and lightweight [19, 36, 52, 30, 57] SR models. Tab. 7 presents the numerical results: our MAN-tiny/light outperforms all other tiny/lightweight methods. Specifically, MAN-tiny exceeds second place by about 0.2 dB on Set5, Urban100, and Manga109, and by around 0.07 dB on Set14 and BSD100. We also list EDSR-baseline [31] for reference; our tiny model has fewer than 150K parameters but achieves restoration quality similar to EDSR-baseline, which is 10× larger than ours. Similarly, our MAN-light surpasses both CNN-based and transformer-based SR models. In comparison with IMDN (CNN) and SwinIR-light/ELAN-light (Transformer), our model leads by 0.66 dB/0.23 dB on the Urban100 (×4) benchmark. Moreover, our MAN-light is superior to the traditional performance-oriented EDSR: the proposed model takes only 2% of the parameters and computations of EDSR while achieving higher PSNR on all benchmarks.
D
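The PSNR figures used throughout these comparisons follow the usual definition, $10\log_{10}(\mathrm{MAX}^{2}/\mathrm{MSE})$. A small helper, assuming images scaled to [0, 1] (the random test images are placeholders):

import numpy as np

def psnr(ref, test, max_val=1.0):
    """PSNR in dB: 10 * log10(MAX^2 / MSE), images assumed in [0, 1]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
hr = rng.random((64, 64))                                          # "ground truth"
sr = np.clip(hr + rng.normal(scale=0.02, size=hr.shape), 0, 1)     # "restored"
print(f"{psnr(hr, sr):.2f} dB")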
For the system safety analysis, we are interested in computing the BRT of $\mathcal{L}(\beta_{L})$ given the dynamics in (1).
The BRT is the set of states such that the system trajectories that start from this set will eventually reach the given target set despite the worst-case disturbance (or, more generally, an exogenous, adversarial input).
Backward Reachable Tube (BRT): the set of initial states of the system for which the agent, acting optimally and under worst-case disturbances, will eventually reach the target set $\mathcal{L}$ within the time horizon $[t,T]$:
The BRT for this collision set corresponds to all the states from which the pursuer can drive the system trajectory into the collision set within the time horizon [t,T]𝑡𝑇[t,T][ italic_t , italic_T ], despite the best efforts of the evader to avoid a collision.
First, a target function $l(x)$ is defined whose sub-zero level set is the target set $\mathcal{L}$, i.e. $\mathcal{L}=\{x:l(x)\leq 0\}$. Typically, $l(x)$ is defined as a signed distance function to $\mathcal{L}$. The BRT seeks to find all states that could enter $\mathcal{L}$ at any point within the time horizon and therefore might be unsafe. This is computed by finding the minimum distance to $\mathcal{L}$ over time:
B
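A concrete instance of the target function $l(x)$ from the last row: for a circular collision set of radius $r$ around a point $c$, the signed distance has a one-line form. The circular geometry below is an assumption chosen purely for illustration.

import numpy as np

def target_function(x, center, radius):
    # Signed distance to a disk: negative inside the target set L,
    # zero on its boundary, positive outside, so L = {x : l(x) <= 0}.
    return np.linalg.norm(x - center) - radius

c, r = np.array([0.0, 0.0]), 1.0
print(target_function(np.array([0.5, 0.0]), c, r))  # -0.5 (inside L)
print(target_function(np.array([2.0, 0.0]), c, r))  #  1.0 (outside L)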
Another approach is to intentionally use broken (zig-zag) multi-hop trajectories to mislead the attacker or avoid risk areas.
The use of distributed antennas is a common approach to address the coverage issue. The fronthaul connection that is needed between the central node and the remote radio heads is highly challenging due to its high bandwidth and stringent latency requirements. It is generally implemented by an optical network. RIS-based wireless networks can be regarded as a more cost-effective alternative for implementing a distributed antenna system with integrated access and fronthaul. This can be enabled by the following distributed network components.
After highlighting several advantages of the directive RIS architecture, we shall discuss its disadvantages as compared to the reflective RIS configuration. In addition to the need for a (metasurface) lens for analog DFT processing, the major issue is the need for longer RF interconnections (see Fig. 7) and a multistage switching network for conductive RF routing, which is in general quite challenging at high frequencies. Switching matrices are used in several applications such as satellite communications [37]. As the frequency and the number of ports increase, however, the losses of signal traces and switches become overwhelming, and designing a printed circuit board (PCB) layout with global interconnections and minimal signal integrity issues is no easy task.
In practice, real-time reconfigurability in the range of milliseconds might still be difficult to achieve, as it requires stringent timing requirements for the control channel. Alternatively, beam-hopping techniques that are popular in satellite communications [34] can be considered. Beam hopping consists of serving user spots sequentially, in turn, according to a predetermined schedule. The periodic beam-hopping time plan can be determined and updated based on the varying traffic demand, and the RIS scattering pattern can be optimized based on long-term statistical channel information [35], which also reduces the training overhead (c.f. Section IV-A). Therefore, the reconfiguration needs to be done only occasionally with long cycle times, and the requirements on the control channel are significantly relaxed. To allow for initial access, all potential beam directions are sequentially illuminated and scanned (beam sweeping) during multiple synchronization signal blocks (SSB). This results in substantial initial-access latency and a long beam-hopping period. Therefore, the RIS node is designed to support a medium number of wide initial-access beams or, alternatively, a permanent directive link is dedicated between the access point and the RIS node. While the control overhead is reduced, synchronous operation (for instance via GPS) between the RIS nodes and the donor nodes is still required. A notable advantage of the redirective RIS system is the simultaneous beam hopping of multiple beams at full aperture gain, particularly when the RIS node is shared among several donor sites (e.g. Fig 2), as explained in the next subsection.
We introduced the concept of nonlocal or redirective reconfigurable surfaces with low-rank scattering as an artificial wave-guiding structure for wireless wave propagation at high frequencies. We showed multiple functionalities that can be implemented, including beam bending, multi-beam data forwarding, wave amplification, routing, splitting, and combining. The observed results indicate that transformation-based intelligent surfaces can make mmWave and THz networks more scalable, secure, flexible, and robust while being energy-, cost-, and spectrally efficient. Mitigating the coverage issue of these frequency bands can be considered a critical milestone in the evolution of terrestrial wireless mobile access. Beyond improved coverage, RIS-based remote nodes can also improve the network capacity due to the extremely directional propagation and the possibility of massive spatial multiplexing with massive MIMO at the central macro baseband node. This enables tens or even hundreds of bits per hertz per square kilometer of area spectral efficiency for mmWaves at low cost and high coverage. While lens-based RIS offers much better performance in terms of signal processing efficiency, its bulkiness (particularly in the case of 3D beamforming) and scalability issues (due to the longer RF interconnections and switching implementation) might be disadvantageous.
D
In the VR display task, the central server transmits virtual 360° video streaming to the user. To avoid transmitting the whole 360° video, the central server can predict the eye movements of the user and extract the corresponding FoV as goal-oriented semantic information. Apart from the PSNR and SSIM mentioned in AR, timing accuracy and position accuracy are also important effectiveness-aware performance metrics for avoiding cybersickness, including: 1) initial delay: time difference between the start of head motion and that of the corresponding feedback; 2) settling delay: time difference between the stop of head motion and that of the corresponding feedback; 3) precision: angular positioning consistency between physical movement and visual feedback in terms of degrees; and 4) sensitivity: capability of inertial sensors to perceive subtle motions and subsequently provide feedback to users.
Due to the difficulty of supporting massive haptic data under stringent latency requirements, the JND can be identified as important goal-oriented semantic information, used to discard the haptic signals that cannot be perceived by the manipulator. Two effectiveness-aware performance metrics, SNR and SSIM, have been verified to be applicable to vibrotactile quality assessment.
To implement a closed-loop XR-aided teleoperation system, the wireless network is required to support mixed types of data traffic, which includes control and command (C&C) transmission, haptic information feedback transmission, and rendered $360^{\circ}$ video feedback transmission [14]. As the XR-aided teleoperation task relies on both parallel and consecutive communication links, guaranteeing the cooperation among these communication links to execute the task is of vital importance. Specifically, the parallel visual and haptic feedback transmissions should be aligned with each other when arriving at the manipulator, and consecutive C&C and feedback transmissions should be within the motion-to-photon delay constraint, which is defined as the delay between the movement of the user's head and the change of the VR device's display reflecting that movement. Either a violation of alignment in the parallel links or of the latency constraint in the consecutive links will lead to a break in presence (BIP) and cybersickness. Therefore, both parallel alignment and consecutive latency should be quantified into effectiveness-aware performance metrics to guarantee the success of XR-aided teleoperation. Moreover, due to the motion-to-photon delay, the control error between the expected trajectory and the actual trajectory will accumulate over time, which may lead to task failure. Hence, how to alleviate the accumulated error remains an important challenge that needs to be solved.
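The two conditions could be checked per control cycle as in the sketch below; the 1 ms alignment tolerance and 20 ms motion-to-photon budget are illustrative assumptions, not values from the cited work:

```python
def feedback_ok(video_arrival, haptic_arrival, motion_start, display_update,
                align_tol=0.001, mtp_budget=0.020):
    """Return True iff (i) the parallel visual and haptic feedback arrive
    aligned within align_tol seconds and (ii) the consecutive
    motion-to-photon delay stays within mtp_budget seconds."""
    aligned = abs(video_arrival - haptic_arrival) <= align_tol
    within_mtp = (display_update - motion_start) <= mtp_budget
    return aligned and within_mtp
```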
Haptic communication has been incorporated by industries to perform grasping and manipulation, where the robot transmits the haptic data to the manipulator. The shape and weight of the objects to be held are measured using cutaneous feedback derived from the fingertip contact pressure and kinesthetic feedback of finger positions, which should be transmitted within stringent latency requirements to guarantee industrial operation safety.
In the scenario of a swarm of (autonomous) robots that need to perform a collaborative task (or a set of tasks) within a deadline over a wireless network, an effective communication protocol that takes into account the peculiarities of such a scenario is needed. Consider the simple case of two robots, say Robot A and Robot B, that communicate through a wireless network and are not collocated. Robot A remotely controls Robot B to execute a task; the outcome of that operation is fed back to Robot A, which performs a second operation and sends the result back to Robot B. All of this must happen within a strict deadline. The amount of information that is generated, transmitted, processed, and sent back can be very large with the traditional information-agnostic approach. On the other hand, if we take into account the semantics of information and the purpose of communication, we change the whole information chain, from its generation point until its utilization. Therefore, defining goal-oriented semantic metrics for the control loop and communication between a swarm of (autonomous) robots is crucial and will significantly reduce the amount of information, leading to a more efficient operation.
C
The second test case is the 33-bus system case33bw, which has multiple branches. In this example, we demonstrate the efficacy of our approach in handling a system with complex components through the implementation of volt-VAR control, which represents smarter inverter behavior (whose characteristics are described in osti2016 ). To incorporate the behavior of volt-VAR control, we enhance the power flow solver used to compute the CLAs by integrating an additional fixed-point iterative method. Table 4.1 shows the computation times for the bilinear and the two MILP formulations. We exclude the computation time for the KKT formulation since the solver fails to find even a feasible (but potentially suboptimal) point within 55000 seconds (15 hours). Our final test case is the 141-bus system case141. Similar to the 33-bus system, the solver could not find the optimal solution for the KKT formulation within a time limit of 15 hours. It is evident that the KKT formulation is intractable. Table 4.1 again shows the results for this test case, and Figs. 0b and 0c compare the computation times for the bilinear and MILP formulations.
The first test case is the 10-bus system case10ba, a simple single-branch network. We consider a variant where the nominal loads are $60\%$ of the values in the Matpower file. The results from each formulation place a sensor at the end of the branch (the bus furthest from the substation) with an alarm threshold of $0.9$ per unit (at the voltage limit). Fig. 0a compares computation times from the three formulations. The KKT formulation takes 26.7 seconds while the bilinear and MILP formulations take 1.96 and 1.54 seconds, respectively. Since the sensor threshold for the KKT and MILP formulations is at the voltage limit, AGD is not needed. Conversely, the bilinear formulation gives a higher alarm threshold. As a result, the AGD method is applied as a post-processing step to achieve the lowest possible threshold without introducing false alarms. The number of false positives reduces from $5.48\%$ to $0\%$. Executing the AGD method takes 0.11 seconds.
The second test case is the 33-bus system case33bw, which has multiple branches. In this example, we demonstrate the efficacy of our approach in handling a system with complex components through the implementation of volt-VAR control, which represents smarter inverter behavior (whose characteristics are described in osti2016 ). To incorporate the behavior of volt-VAR control, we enhance the power flow solver used to compute the CLAs by integrating an additional fixed-point iterative method. Table 4.1 shows the computation times for the bilinear and the two MILP formulations. We exclude the computation time for the KKT formulation since the solver fails to find even a feasible (but potentially suboptimal) point within 55000 seconds (15 hours). Our final test case is the 141-bus system case141. Similar to the 33-bus system, the solver could not find the optimal solution for the KKT formulation within a time limit of 15 hours. It is evident that the KKT formulation is intractable. Table 4.1 again shows the results for this test case, and Figs. 0b and 0c compare the computation times for the bilinear and MILP formulations.
To address challenges associated with power flow nonlinearities, we employ a linear approximation of the power flow equations that is adaptive (i.e., tailored to a specific system and a range of load variability) and conservative (i.e., intended to over- or under-estimate a quantity of interest to avoid constraint violations). These linear approximations are called conservative linear approximations (CLAs) and were first proposed in BUASON2022 . As a sample-based approach, the CLAs are computed using the solution to a constrained regression problem across all samples within the range of power injection variability. They linearly relate the voltage magnitudes at a particular bus to the power injections at all PQ buses. These linear approximations can also effectively incorporate the characteristics of more complex components (e.g., tap-changing transformers, smart inverters, etc.), only requiring the ability to apply a power flow solver to the system. Additionally, in the context of long-term planning, the CLAs can be readily computed with knowledge of expected DER locations and their potential power injection ranges. The accuracy and conservativeness of our proposed method are based on the information of the locations of DERs and their power injection variability. As inputs, our method uses the net load profiles, including the size of PVs, when computing the CLAs. In practice, this data can be obtained by leveraging the extensive existing research on load modeling and monitoring to identify the locations and capabilities of behind-the-meter devices (refer to, e.g., Grijalva2021 ; Schirmer2023 ).
Table 4.1 shows both the computation times and the results of randomly drawing sampled power injections within the specified range of variability, computing the associated voltages by solving the power flow equations, and finding the number of false positive alarms (i.e., the voltage at a bus with a sensor is outside the sensor's threshold but there are no voltage violations in the system). The results for the 33-bus and 141-bus test cases given in Table 4.1 illustrate the performance of the proposed reformulations. Whereas the KKT formulation is computationally intractable, our proposed reformulations find solutions within approximately one minute, where the MILP formulation with the BVR method typically exhibits the fastest performance. The solutions to the reformulated problems place a small number of sensors (two to four sensors in systems with an order of magnitude or more buses). No solutions suffer from false negatives since all samples with a voltage violation trigger an alarm. There are a number of false alarms prior to applying the AGD; after its application, they decrease dramatically to a small fraction of the total number of samples ($1.34\%$ and $0.01\%$ in the 33-bus and the 141-bus systems, respectively). These observations suggest that our sensor placement formulations provide a computationally efficient method for identifying a small number of sensor locations and associated alarm thresholds that reliably identify voltage constraint violations with no false negatives (missed alarms) and few false positives (spurious alarms).
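The sample-based evaluation described above could be implemented as in the following sketch; the voltage limits and the array layout are assumptions for illustration:

```python
import numpy as np

def alarm_statistics(voltages, sensor_idx, lo, hi, v_min=0.95, v_max=1.05):
    """Rates of false positives/negatives over power flow samples.
    voltages: (n_samples, n_buses) per-unit magnitudes from the solver;
    sensor_idx: buses carrying sensors; lo/hi: per-sensor alarm thresholds."""
    violation = np.any((voltages < v_min) | (voltages > v_max), axis=1)
    sensed = voltages[:, sensor_idx]
    alarm = np.any((sensed < lo) | (sensed > hi), axis=1)
    false_pos = np.mean(alarm & ~violation)   # spurious alarms
    false_neg = np.mean(violation & ~alarm)   # missed alarms
    return false_pos, false_neg
```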
D
We have employed an advanced classification-based DOA estimation algorithm that is free of quantization errors. The backbone network is a CNN, where a mask layer is used to enhance the robustness of the DOA estimation. Furthermore, to improve the accuracy of the DOA estimation of the CNN-based classification model, we incorporate a quantization-error-free soft label encoding and decoding strategy.
Consider a room with an ad-hoc microphone array of $N$ nodes and $B$ speakers, where each node comprises a conventional array of $M$ microphones.
We recorded a real-world dataset named Libri-adhoc-nodes10. It contains a conference room and an office room. Each room has 10 ad-hoc nodes and a loudspeaker. Each node contains a 4-channel linear array with an aperture of 8 cm. Fig. 4 shows the recording environment of the two rooms. The size of the office room is approximately $9.8\times 10.3\times 4.2$ m with $T_{60}\approx 1.39$ s. The size of the conference room is approximately $4.26\times 5.16\times 3.16$ m with $T_{60}\approx 1.06$ s. It records the 'test-clean' subset of the LibriSpeech data replayed by the loudspeaker in the rooms, which contains 20 male speakers and 20 female speakers. The ad-hoc nodes and the loudspeaker have the same height of 1.3 m. The ambient noise of the recording environments can be ignored. The detailed description of the data and its open-source release, which includes the speaker IDs and positions, microphone node positions, self-rotation angles, etc., will be available at https://github.com/Liu-sp/Libri-adhoc-nodes10.
We have recorded a real-world dataset named Libri-adhoc-nodes10. The Libri-adhoc-nodes10 dataset is a 432-hour collection of replayed speech from the "test-clean" subset of the Librispeech corpus [32], where an ad-hoc microphone array with 10 nodes was placed in an office room and a conference room, respectively. Each node is a linear array of four microphones. For each room, 4 array configurations with 10 distinct speaker positions per configuration were designed.
For the test sets, we need to generate simulated data for ad-hoc microphone arrays, whose ad-hoc nodes are either circular arrays or linear arrays. Specifically, for each randomly generated room, we repeated the procedure of constructing the training data, except that (i) we randomly placed 10 ad-hoc nodes in the room and (ii) we placed $B$ speakers in the room with $B\in\{1,2\}$. We added diffuse noise with an SNR level randomly selected from $\{10, 20, 30\}$ dB. The SNR was calculated as an energy ratio of the average direct sound of all microphone channels to the diffuse noise. Note that, due to the potentially large difference in distances between the nodes and speakers, the SNR at the nodes could vary over a wide range. Each test set consists of 1,200 utterances. To study the effects of different types of microphone arrays on performance, for each randomly generated test room, we applied exactly the same environmental setting (including the speech source, room environment, speaker positions, microphone node positions and self-angles) to both circular-array-based ad-hoc nodes and linear-array-based ad-hoc nodes.
C
The even coding model also has the potential to adapt to binocular vision data by incorporating an additional input dimension of size two.
As a result, the question of whether these methods are principled or reflect crucial features of biological systems is often sidelined or deemed irrelevant.
Investigating whether the model can detect binocular disparity or even construct a 3D model of the world would be fascinating.
The even coding model also has the potential to adapt to binocular vision data by incorporating an additional input dimension of size two.
After the model has been trained, the vast majority of the output values are either 0 or 1, signifying that our model encodes the images using a binary representation.
B
$u = \uppi(I, \mathbf{x}, E) = \uppi(S(\mathbf{x}, E), \mathbf{x}, E) \implies u = \hat{\uppi}(\mathbf{x})$
Specifically, given the set of undesirable states $\mathcal{O}$, the sensor mapping can be composed with the vision-based controller to obtain the closed-loop, state-feedback policy $\hat{\uppi}$ for a given environment:
The complement of the BRAT thus represents the unsafe states for the robot under $\hat{\uppi}$.
Given the policy $\hat{\uppi}$, we compute the BRT $\mathcal{V}$ by solving the HJB-VI in (7).
Finally, a model-based spline planner $P$ takes in the predicted waypoint to produce a smooth control profile for the robot. Hence, the closed-loop policy $\hat{\uppi}$ is given by $\hat{\uppi} := P \circ C \circ S(\mathbf{x}, g, E)$.
C
An upward-pointing arrow leaving node $(t,u)$ represents $y(t,u)$, the probability of outputting an actual label; and a rightward-pointing arrow represents $\O(t,u)$, the probability of outputting a blank at $(t,u)$.
In standard decoding algorithms for RNN-Ts, the emission of a blank symbol advances input by one frame.
introduces big blank symbols. Those big blank symbols could be thought of as blank symbols with explicitly defined durations – once emitted, the big blank advances $t$ by more than one, e.g., two or three.
Note that when outputting an actual label, $u$ would be incremented by one; and when a blank is emitted, $t$ is incremented by one.
With the multi-blank models, when a big blank with duration $m$ is emitted, the decoding loop increments $t$ by exactly $m$.
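A minimal greedy-decoding sketch of this rule; the scoring callback `step_logits` and the per-frame emission cap are assumptions standing in for the actual joint network:

```python
import numpy as np

def greedy_decode_multiblank(step_logits, T, blank_durations, max_symbols=10):
    """Greedy decoding loop for a multi-blank RNN-T.
    step_logits(t, hyp) -> scores over labels and blank symbols;
    blank_durations maps each blank symbol id to its duration m
    (the standard blank has m = 1, big blanks have m > 1)."""
    hyp, t = [], 0
    while t < T:
        for _ in range(max_symbols):       # cap label emissions per frame
            scores = step_logits(t, hyp)
            k = int(np.argmax(scores))
            if k in blank_durations:
                t += blank_durations[k]    # a big blank jumps m frames
                break
            hyp.append(k)                  # real label: u advances, t stays
        else:
            t += 1                         # safety: force frame advance
    return hyp
```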
C
The utterances of the training, development and seen test sets in the noisy LA dataset are generated based upon those of the training, development and test sets of the LA dataset, respectively. The utterances in these three sets are generated using six scenes: Airport, Bus, Park, Public, Shopping, Station. The utterances of the unseen test set are simulated with four scenes: Metro, Pedestrian, Street, Tram.
The acoustic scenes are randomly sampled to mix with the bona fide and spoofed utterances at 6 different SNRs each: -5dB, 0dB, 5dB, 10dB, 15dB and 20dB.
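Mixing a scene recording into an utterance at a prescribed SNR amounts to scaling the scene by the appropriate gain, as in this sketch (the resize-to-length handling is an assumption):

```python
import numpy as np

def mix_at_snr(utterance, scene, snr_db):
    """Mix an acoustic scene into an utterance at a target SNR in dB,
    scaling the scene so that 10*log10(P_speech / P_scene) == snr_db."""
    scene = np.resize(scene, utterance.shape)       # loop/crop to length
    p_speech = np.mean(utterance ** 2)
    p_scene = np.mean(scene ** 2) + 1e-12           # avoid division by zero
    gain = np.sqrt(p_speech / (p_scene * 10 ** (snr_db / 10.0)))
    return utterance + gain * scene

# One mixture per SNR in {-5, 0, 5, 10, 15, 20} dB, as in the dataset.
```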
The fake utterances are generated by mixing another randomly sampled acoustic scene with the enhanced utterances, each mixed at 6 different SNRfake values: -5dB, 0dB, 5dB, 10dB, 15dB and 20dB. Fake utterances are also generated using the open-source toolkit Augly.
The real utterances of our training, development and test sets are generated based upon the bona fide ones of the training, development and test sets of the LA dataset, respectively. They are generated by randomly adding acoustic scenes to clean utterances at 6 different SNRs: -5dB, 0dB, 5dB, 10dB, 15dB and 20dB.
The statistics of real and fake utterances in our SceneFake dataset at different SNRs are reported in Tables 4 and 5, where #-5dB, #0dB, #5dB, #10dB, #15dB and #20dB denote the number of real or fake utterances at the 6 different SNRs: -5dB, 0dB, 5dB, 10dB, 15dB and 20dB.
A
[4, 5]. The technique discussed in this paper, building upon the preliminary idea introduced in [1], uses a system realization that is based on the "information-state" as the state vector. An ARMA model, which represents the current output in terms of the inputs and outputs from $q$ steps in the past, is found by solving a linear regression problem relating the input and output data. Defining the state vector, the information-state, to be the past inputs and outputs lets us realize a state-space model directly from the estimated time-varying ARMA parameters.
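For the single-input, single-output case, the linear regression step could look like the sketch below; the stacked-regressor layout is an assumption, and the paper's general (MIMO, time-varying) formulation is richer than this:

```python
import numpy as np

def fit_arma(u, y, q):
    """Least-squares fit of an ARMA model
        y[k] = sum_i a_i * y[k-i] + sum_i b_i * u[k-i],  i = 1..q,
    from SISO input/output data; returns the ARMA parameters (a, b),
    which directly parameterize the information-state model."""
    rows = [np.concatenate([y[k-q:k][::-1], u[k-q:k][::-1]])
            for k in range(q, len(y))]
    targets = y[q:]
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets),
                                rcond=None)
    return theta[:q], theta[q:]
```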
The pioneering work in system identification for LTI systems is the Ho-Kalman realization theory [6], of which the Eigensystem Realization Algorithm (ERA) is one of the most popular variants [4]. Another system identification method, namely the $q$-Markov covariance equivalent realization, generates a stable LTI system model that matches the first "$q$" Markov parameters of the underlying system and also matches the equivalent steady-state covariance response/parameters of the identified system [7, 8]. These algorithms assume stable systems so that the response can be modeled using a finite set of parameters relating the past inputs to the current output (a moving-average (MA) model). For lightly damped and marginally stable systems, the length of history to be considered and the number of parameters to be estimated become very large, leading to numerical issues when solving for the parameters. To overcome this issue, the observer Kalman identification algorithm (OKID) [9] uses an ARMA model, rather than an MA model, consisting of past outputs and controls to model the current output. The time-varying counterparts of the ERA and OKID - TV-ERA and TV-OKID - were developed in [10] and [11], respectively. The identification of time-varying linear systems (TV-ERA and TV-OKID) also builds on the earlier work on time-varying discrete-time system identification [5, 12]. The OKID and TV-OKID explain the usage of an ARMA model as equivalent to an observer-in-the-loop system, and postulate that the identified observer is a deadbeat observer similar to the work in [13].
The results show that the information-state model can predict the responses accurately. The TV-OKID approach can also predict the response well in the oscillator experiment when the experiments have zero initial conditions, but it suffers from inaccuracy if the experiments have non-zero initial conditions, as seen in Fig. 5b. In the cases of the fish and the cart-pole, TV-OKID fails with the observer in the loop. We found that the identified open-loop Markov parameters predict the response well, but the prediction diverges from the truth when the observer is introduced, making the predictions useless. This observation further validates the hypothesis that the ARMA model cannot be explained by an observer-in-the-loop system. Hence, we use only the estimated open-loop Markov parameters without the observer to show the performance of the TV-OKID prediction. The last $q$ steps in OKID are ignored, as there is not sufficient data to calculate models for the last few steps, as discussed in Sec. 6.3. There is also the potential for numerical errors to creep in due to the additional steps taken in TV-OKID: determination of the time-varying Markov parameters from the time-varying observer Markov parameters, calculating the SVD of the resulting Hankel matrices, and the calculation of the system matrices from these SVDs, as mentioned in [11]. On the other hand, the effort required to identify systems using the information-state approach is negligible compared to other techniques, as the state-space model can be set up by just using the ARMA parameters. More examples can be found in [1], where the authors use the information-state model for optimal feedback control synthesis in complex nonlinear systems.
This paper describes a new system realization technique for the system identification of linear time-invariant as well as time-varying systems. The system identification method proceeds by modeling the current output of the system using an ARMA model comprising the finite past outputs and inputs. A theory based on linear observability is developed to justify the usage of an ARMA model, which also provides the minimum number of inputs and outputs required from history for the model to fit the data exactly. The method uses the information-state, which simply comprises the finite past inputs and outputs, to realize a state-space model directly from the ARMA parameters. This is shown to be universal for both linear time-invariant and time-varying systems that satisfy the observability assumption. Further, we show that feedback control based on the minimal information-state is optimal for the underlying state-space system, i.e., the information-state is indeed a lossless representation for the purpose of control. The method is tested on various systems in simulation, and the results show that the models are accurately identified.
The idea of using an ARMA model to describe the input-output data of an LTI system was first introduced in a series of papers related to the Observer/Kalman filter identification (OKID) algorithm [9, 18, 13], and the time-varying case was later considered in [11]. The credit for using an ARMA model for system identification goes to the authors of the papers mentioned above; however, the explanation for the ARMA parameters given in their work is not exact and does not apply in general, as we will show empirically. This section summarizes the OKID algorithm and discusses why the information-state approach is computationally much simpler and why the theory discussed in Section 3, based on observability, is the correct explanation for the ARMA parameters.
A
In many cases, the transmission process is the main bottleneck causing delays in edge inference, especially when the communication rate is low.
The extra feature extraction step in our method increases the complexity on the device side, but it effectively removes the task-irrelevant information and largely reduces the communication overhead.
While our method introduces additional complexity on the device side due to the feature extraction process, the proposed TOCOM-TEM method still enables low-latency inference.
In this paper, we develop a task-oriented communication framework for edge video analytics, which effectively extracts task-relevant features and reduces both the spatial and temporal redundancy in the feature domain.
Thus, it addresses the objective of reducing communication overhead by discarding task-irrelevant information.
A
The Connectome 1.0 human brain DW-MRI data used in this study is part of the MGH Connectome Diffusion Microstructure Dataset (CDMD) (Tian et al., 2022), which is publicly available on the figshare repository https://doi.org/10.6084/m9.figshare.c.5315474. The MATLAB codes generated for the simulation study, parameter fitting, and optimising b-value sampling are openly available at https://github.com/m-farquhar/SubdiffusionDKI.
The utility of diffusional kurtosis imaging for inferring information on tissue microstructure was described decades ago. Continued investigations in the DW-MRI field have led to studies clearly describing the importance of mean kurtosis mapping to clinical diagnosis, treatment planning and monitoring across a vast range of diseases and disorders. Our research on robust, fast, and accurate mapping of mean kurtosis using the sub-diffusion mathematical framework promises new opportunities for this field by providing a clinically useful, and routinely applicable mechanism for mapping mean kurtosis in the brain. Future studies may derive value from our suggestions and apply methods outside the brain for broader clinical utilisation.
The direct link between the sub-diffusion model parameter $\beta$ and mean kurtosis is well established (Yang et al., 2022; Ingo et al., 2014, 2015). An important aspect to consider is whether the mean $\beta$ used to compute the mean kurtosis is alone sufficient for clinical decision making. While the benefits of using kurtosis metrics over other DW-MRI-derived metrics in certain applications are clear, the adequacy of mean kurtosis over axial and radial kurtosis is less apparent. Most studies perform the mapping of mean kurtosis, probably because the DW-MRI data can be acquired in practically feasible times. Nonetheless, we can point to a few recent examples where the measurement of directional kurtosis has clear advantages. A study on mapping tumour response to radiotherapy treatment found axial kurtosis to provide the best sensitivity to treatment response (Goryawala et al., 2022). A different study found a correlation between glomerular filtration rate and axial kurtosis in assessing renal function and interstitial fibrosis (Li et al., 2022a). Unipolar depression subjects have been shown to have brain-region-specific increases in mean and radial kurtosis, while for bipolar depression subjects axial kurtosis decreased in specific brain regions and decreases in radial kurtosis were found in other regions (Maralakunte et al., 2022). This selection of studies highlights future opportunities for extending the methods to additionally map axial and radial kurtosis.
Instead of attempting to improve an existing model-based approach for kurtosis estimation, as has been considered by many others, we considered the problem from a different perspective. In view of the recent generalisation of the various models applicable to DW-MRI data (Yang et al., 2022), the sub-diffusion framework provides new, unexplored opportunities, for fast and robust kurtosis mapping. We report on our investigation into the utility of the sub-diffusion model for practically useful mapping of mean kurtosis.
For DKI to become a routine clinical tool, DW-MRI data acquisition needs to be fast and provide a robust estimation of kurtosis. The ideal protocol should have a minimum number of b-shells and diffusion encoding directions in each b-shell. The powder averaging over diffusion directions improves the signal-to-noise ratio of the DW-MRI data used for parameter estimation. Whilst this approach loses out on the directionality of the kurtosis, it nonetheless provides a robust method of estimating mean kurtosis (Henriques et al., 2021), a metric of significant clinical value.
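Powder averaging itself is a simple per-shell mean over directions, e.g. (array layout assumed):

```python
import numpy as np

def powder_average(signals, bvals):
    """Average DW-MRI signals over diffusion-encoding directions within
    each b-shell, trading directionality for SNR before model fitting.
    signals, bvals: 1-D arrays, one entry per acquired measurement."""
    signals = np.asarray(signals, dtype=float)
    bvals = np.asarray(bvals)
    shells = np.unique(bvals)
    averaged = np.array([signals[bvals == b].mean() for b in shells])
    return shells, averaged
```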
A
Variance of $\sigma_{0}^{2}$ and $\gamma_{0}^{2}$
$\overset{(a)}{\leq} \|\mathbf{F}^{\mathrm{H}}\mathbf{G}^{\prime}_{i}\|^{2}_{\mathrm{F}} + \|\mathbf{F}^{\mathrm{H}}\mathbf{G}^{\prime}_{j}\|^{2}_{\mathrm{F}}$
$\bm{\theta}^{\prime} \triangleq [\bm{\theta}^{\mathrm{T}}, t]^{\mathrm{T}} \in \mathbb{C}^{(M+1)\times 1}$
Indoor region size ($\mathrm{m}^{3}$)
$= \|\mathbf{F}^{\mathrm{H}}\Delta\mathbf{G}^{\prime}_{i} - \mathbf{F}^{\mathrm{H}}\Delta\mathbf{G}^{\prime}_{j}\|^{2}_{\mathrm{F}}$
C
By (2) and (3), the spatial temperature profiles are omitted and a coherent temperature profile between all nodes and edges is ensured; see also, e.g., Krug et al. (2021).
We regard the internal energy of water as the main energy carrier and neglect other energy forms. Furthermore, as in Machado et al. (2022), we assume a linear dependency between the internal energy and the temperature of water.
The power-to-heat (P2H) connection of the two layers is implemented by heat pumps that couple nodes from the electrical layer with edges from the thermal layer.
Typically, the dynamics of the electrical layer and the heat pumps are fast compared to the thermal layer,
The thermal edges, i.e., the simple pipes and heat exchangers, are modeled as pipes transporting water as the thermal energy carrier, which exchanges heat with its environment due to thermal losses, heat injection or extraction.
D
Step 4: Combine subproblems’ solutions to establish a valid upper bound for (29). Evaluate the bound performance by measuring the gap between lower and upper bounds.
Table 1 reports the optimality gap and the computation time of Step 3 after one iteration, which is the most time-consuming component in the proposed method. The results demonstrate the consistent performance of our approach across different settings. Using the multipliers obtained in Step 2 without further updates, we can achieve a tight upper bound with an optimality gap of approximately 3%, indicating that a near-optimal solution to (29) is attained. In contrast, the benchmark method cannot provide an accurate estimation of the unknown globally optimal solution. A major reason is that in the benchmark approach, the complementarity constraint has to be first linearized using the big-M method and then dualized to ensure decomposability. However, this process introduces strong linearity in the relaxed problem, which tends to produce extreme solutions that compromise the quality of the derived bound. As a result, the Lagrangian relaxation in the benchmark approach only yields a trivial upper bound with up to 80% optimality gap, providing little insight into the problem’s true complexity. Our proposed method, however, circumvents the need to dualize the complementarity constraint by employing appropriate relaxations based on the inherent characteristics of the problem. By doing so, we simplify the complex model into a more tractable form with favorable structures, while still capturing the essential features of the problem. Importantly, the complementarity constraint remains respected in the relaxed subproblem, allowing us to derive a significantly tighter upper bound compared to the benchmark approach. Besides, the average computation time to optimally solve each subproblem in the proposed method is less than 2 minutes. We emphasize that this solving time is satisfactorily short for an infrastructure planning problem that does not require real-time computation. In fact, the computation time is orders of magnitude shorter than the implementation time of a deployment plan (e.g., in the range of months or years), rendering it insignificant for the planning purpose. This comprehensive evaluation confirms the effectiveness and efficiency of our proposed approach.
We establish a tight upper bound for the joint deployment problem despite its nonconcavity. A decomposable problem is developed through proper model relaxations. By leveraging the favorable structures of the relaxed problem, we are able to obtain an accurate estimation of the globally optimal solution to the original problem, enabling us to verify the optimality of the solution obtained. We show that our approach provides a high-quality upper bound with an optimality gap of around 3%.
Step 5: Terminate the procedure if the optimality gap is satisfactorily tight. Otherwise, update the multipliers according to (43c) and go to Step 3.
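A sketch of the multiplier update in Step 5 and the stopping test; the projection onto the nonnegative orthant is an assumption about the form of (43c), which is not reproduced here:

```python
import numpy as np

def subgradient_step(lmbda, subgrad, zeta):
    """Projected subgradient ascent on the Lagrange multipliers:
    lambda <- max(0, lambda + zeta * g)."""
    return np.maximum(0.0, lmbda + zeta * np.asarray(subgrad))

def optimality_gap(upper_bound, lower_bound):
    """Relative gap between the relaxation's upper bound and a feasible
    lower bound; terminate once it is satisfactorily tight (~3% here)."""
    return (upper_bound - lower_bound) / max(abs(lower_bound), 1e-12)
```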
where $\zeta$ is the step size. Due to the model relaxation, the established upper bound is an overestimation of the globally optimal solution to the original problem (29). In other words, the bound is also a theoretical upper bound for the original problem, which allows us to quantify the optimality of its solution. We summarize the derivation procedure as follows:
C
2) Image quality indicator: As shown in Fig. 2e, DEviS can serve as an indicator of the quality of medical images. Uncertainty estimation is an intuitive and quantitative way to inform clinicians or researchers about the quality of medical images. DEviS indicates image quality quantitatively through the distribution of uncertainty values and qualitatively through the degree of explicitness of the uncertainty map. Furthermore, our developed UAF module aids in the initial screening of low-quality and high-quality data. High-quality data can be directly employed in clinical practice, while low-quality data necessitates expert judgment before utilization.
6) FIVES dataset. In the second application, the Fundus Image Vessel Segmentation (FIVES) dataset is used for the quality indicator. In the FIVES dataset, each image was evaluated for four qualities: normal, lighting and color distortion, blurring, and low-contrast distortion. In this experiment, we define normal images as high-quality data and images under other conditions as low-quality data. During the experimental process, DEviS was initially trained on the FIVES dataset, which comprises 300 slices of high-quality images. Subsequently, the performance of DEviS was evaluated on a mixed dataset from FIVES, consisting of 300 slices comprising both high- and low-quality images. This mixed dataset comprised 159 high-quality slices and 141 low-quality slices. Throughout both the training and testing stages, each case was consistently adjusted to dimensions of $565\times 584$ voxels, ensuring uniformity across the dataset.
2) Image quality indicator: As shown in Fig. 2e, DEviS can serve as an indicator of the quality of medical images. Uncertainty estimation is an intuitive and quantitative way to inform clinicians or researchers about the quality of medical images. DEviS indicates image quality quantitatively through the distribution of uncertainty values and qualitatively through the degree of explicitness of the uncertainty map. Furthermore, our developed UAF module aids in the initial screening of low-quality and high-quality data. High-quality data can be directly employed in clinical practice, while low-quality data necessitates expert judgment before utilization.
We conducted OOD experiments on the Johns Hopkins OCT dataset and the Duke OCT dataset with Diabetic Macular Edema (DME). As shown in Fig. 6 a, we first observed a slight improvement in results for mixed ID and OOD data after using DEviS. Then, we found significant differences in segmentation performance between results with and without UAF. Additionally, there were also marked differences in the distribution of uncertainty between the ID and OOD data, especially when adding the UAF module, as shown in Fig. 6 b. As depicted in Fig. 6 c (i), we then employed Uniform Manifold Approximation and Projection (UMAP) to visually assess the integration of our method. In the spatial clustering results of the base network framework, we observed overlapping of ID and OOD data batches. However, after integrating DEviS, we observed improved batch-specific separation of ID and OOD data, particularly for the ID data. Furthermore, the integration of UAF with DEviS effectively eliminated the OOD data, resulting in a more pronounced batch effect. Additionally, we first presented the uncertainty estimation map corresponding to UMAP in Fig. 6 c (ii). It is evident from the map that the boundary region between different batches exhibits significantly higher uncertainty. More intuitively, the segmentation results and uncertainty maps of ID and OOD data can be found in Fig. 8 a. These results combine to show that DEviS with UAF provides a solution for filtering out abnormal areas where lesions may be present in OOD data.
In what follows, we apply DEviS with UAF to indicate the quality of data for real-world applications. The FIVES datasets are used for quality assessment experiments. We initially classified samples into three categories based on their quality labels: high quality, high & low quality, and low quality. We observed distinct performance variations among these categories (Fig. 7 a (i)). To further demonstrate its ability to indicate image quality, we delved into a combination of high and low-quality data to filter out high-quality data. Before the application of UAF, we identified 159 high-quality and 141 low-quality data samples. Upon implementing UAF, the distribution shifted, resulting in 153 high-quality and 61 low-quality data samples. This transition led to a remarkable increase in the proportion of high-quality data from 53% to 71%. Notably, the task at hand posed a greater challenge in assessing data quality compared to the detection of OOD data, as all data sources originated from the same distribution. We also found a significant performance boost with UAF in Dice and ECE metrics. (Fig. 7 a (ii)). Additionally, we investigated the distribution of uncertainty to discern differences between different qualities data (Fig. 7 b (i)). Moreover, the uncertainty distribution of high and low mixed quality with UAF was closer to the low-quality data (Fig. 7 b (ii)). The spatial clustering results of mixed-quality images were visualized using UMAP in the Fig. 7 c. Prior to incorporating our algorithm, some batch-specific separation was observed, albeit with partially overlapping regions (Fig. 7 c (i) 1st and 4th columns). However, upon integrating DEviS with UAF, a slight batch effect was observed (Fig. 7 c (i) 2nd, 3rd, 5th and 6th columns). Additionally, the UMAP visualization with uncertainty map exhibited uncertainty warnings for partially overlapping points, with noticeably high uncertainties along the edges of prediction errors (Fig. 7 c (ii) (1, 2)). Moreover, the segmentation results and uncertainty map of low-quality and high-quality images exhibited in Fig. 8 b, providing a more intuitive representation of the quality disparity. These results demonstrate that DEviS with UAF can serve as an image quality indicator to fairly value personal data in healthcare and consumer markets. This would help to remove harmful data while identifying and collecting higher-value data for diagnostic support.
D
$I_{F} = I_{F} \mathbin{+\!\!\!+} I_{Fp}$
$\textbf{0.95}^{*} \mid \textbf{3.7\%}^{*}$
$\textbf{0.98}^{*} \mid \textbf{2.2\%}^{*}$
$\textbf{0.96}^{*} \mid \textbf{3.2\%}^{*}$
$\{i_{Sp}^{1}, i_{Sp}^{2}, i_{Sp}^{3}, i_{Sp}^{4}, i_{Mp}^{1}, i_{Mp}^{2}, i_{Mp}^{3}, i_{Op}^{1}, i_{Op}^{2}, i_{Op}^{3}\}$. The three IBIs in $I_{Cp}$ that minimize the absolute error are chosen as shown in Equation (4) and are concatenated into the final IBI sequence, $I_{F}$. After iterating over $q$ segments to obtain the complete $I_{F}$, the feasible optimized solutions are regarded as the final estimated IBIs from the motion-contaminated PPG.
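A sketch of the selection-and-concatenation step; the reference-based absolute-error criterion stands in for Equation (4), which is not shown here:

```python
import numpy as np

def select_ibis(candidates, reference, n_pick=3):
    """From the candidate IBI set I_Cp, keep the n_pick intervals with
    the smallest absolute error w.r.t. a reference IBI estimate."""
    candidates = np.asarray(candidates, dtype=float)
    order = np.argsort(np.abs(candidates - reference))
    return candidates[order[:n_pick]]

# Per segment: I_F = np.concatenate([I_F, select_ibis(I_Cp, ref)])
```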
C
To further validate the effectiveness and reliability of the system, we deployed the system at both the transmitting and receiving ends and conducted real-channel image transmission using hardware. As shown in Fig. 9, YunSDR Y750 devices (introduction website: https://www.v3best.com/y750s) were used at both the transmitting and receiving ends, with the bitstream employing OFDM modulation and a bpp parameter of 0.1. The measured SNR of the wireless channel was approximately 0 dB. After completing the transmission, the average performance metrics of the received images are presented in Table V.
At a bpp value of 0.1 and an SNR of around 0 dB, the image metrics obtained from the hardware experiment fluctuate around the results obtained from the software simulation. In such low-SNR scenarios, our STSCI still performs well both in terms of image metrics and visual effects. The average values are slightly lower than, but very close to, the results from the software simulation.
These results demonstrate that STSCI is capable of performing well in real hardware deployment and transmitting over real channels. It also confirms the reliability of the software simulation results obtained earlier.
Meanwhile, Fig. 10 provides a visual example of hardware transmission along with its corresponding image metrics. According to Fig. 10, even at an SNR of around 0 dB, the image metrics of the final image are still relatively high, without significant distortion or deformation. In contrast to the blurry and unclear version of the dial without enhancement, the enhanced version maintains clear visibility of the pointers and readings on the dial.
To further validate the effectiveness and reliability of the system, we deployed the system at both the transmitting and receiving ends and conducted real-channel image transmission using hardware. As shown in Fig. 9, YunSDR Y750 devices (introduction website: https://www.v3best.com/y750s) were used at both the transmitting and receiving ends, with the bitstream employing OFDM modulation and a bpp parameter of 0.1. The measured SNR of the wireless channel was approximately 0 dB. After completing the transmission, the average performance metrics of the received images are presented in Table V.
A
Note that this statement holds under condition (30), which implies that the received power of the desired source is stronger than the received power of each interference source, considering the attenuation stemming from the activity duration.
The proofs of Proposition 1 and Proposition 2 rely on the following lemma, which is important in its own right.
Third, following the same techniques as in the proofs of Proposition 1 and Proposition 2, similar results are derived for an alternative definition of the SIR: $\text{SIR}_{\text{tot}}(\bm{\Gamma}) \equiv \frac{\bm{d}_{0}^{H}\bm{\Gamma}\bm{d}_{0}}{\sum_{j=1}^{N_{\text{I}}}\bm{d}_{j}^{H}\bm{\Gamma}\bm{d}_{j}}$.
Since we established that the Riemannian approach is better than the Euclidean one in terms of the SIR in Proposition 1, Proposition 2 implies that increasing the SNR further increases the gap between the two approaches. Nevertheless, it also indicates that the performance of the Riemannian approach in terms of the SIR is more sensitive to noise compared to the Euclidean counterpart.
Similarly to Proposition 1, the following Proposition 3 examines the performance in terms of the SIR defined in (43). Here, assumptions 2-4 are not required, and therefore, the ATFs of the interference sources could be correlated, and the number of sources is not limited by the number of microphones in the array.
A
The remainder of this manuscript is organized as follows. Section II includes an overview of related works from the literature on breathing anomaly detection using various sensing technologies and machine learning. Section III describes various human breathing patterns from the literature to be used as breathing classes for anomaly detection. Section IV presents the system model, relevant theory and lock-in detection process used in this study. The details of hardware components, data collection and initial data processing are depicted in Section V. Next, Section VI describes the handcrafted features used and their extraction process. Data classification process using the chosen machine learning algorithms are included in Section VII and the results along with their interpretations are discussed in Section VIII. Finally, Section IX presents the conclusions drawn from the whole effort and forecasts future research directions.
Some past classification efforts involved one-class classification or outlier detection, as in [30], where the model was trained using human breathing data in resting condition to predict whether the person was exercising in new examples. Binary classification between normal breathing and apnea was performed in [29] to detect obstructive sleep apnea. Multiclass breathing classification efforts considered different types of breathing anomalies like tachypnea, bradypnea, hyperpnea, hypopnea, etc., and sometimes more complicated anomalies like Cheyne-Stokes, Biot's and apneustic breathing as separate classes [24, 32, 31]. Most of these breathing patterns are explained in Section III. Data for these efforts were usually obtained from human volunteers, who are generally unable to breathe with precise frequency, amplitude and pattern. Occasionally, data from patients with breathing disorders were utilized, but this approach had its limitations as well. This is because even the patients may not consistently exhibit abnormal breathing patterns, which increases the risk of mislabeling the training data. In the current study, more reliable data were generated by using a programmable robot with precise human-like breathing capability. Various machine learning techniques were employed in the literature to classify breathing data, including decision tree, random forest, support vector machine, XGBoost, $K$-nearest neighbors, feedforward neural network, and logistic regression, among others. The performance of these models was assessed using different evaluation metrics such as confusion matrices, $K$-fold cross-validation, accuracy, precision, sensitivity (recall), specificity, F1-score, etc. [31, 29, 24, 11, 19].
Feature extraction is an important step in machine learning-based data classification. After detrending, four handcrafted features were extracted from the collected data using MATLAB code for the following three cases:
Researchers have been applying machine learning and deep learning techniques on human respiration data collected through various technologies for anomaly detection. Most of these efforts made use of handcrafted features to perform breathing data classification for anomaly detection. Some of the common categories of features used in the literature were statistical features from the data (mean, standard deviation, skewness, kurtosis, root mean-square value, range etc.), signal-processing based features (Fourier co-efficients, autoregressive integrated moving average co-efficients, wavelet decomposition, mel-frequency cepstral coefficients, linear predictive coding etc.), and respiration related features (breathing rate, amplitude, inspiratory time, expiratory time etc.) [28, 29, 30, 11, 31, 24]. In some research efforts, deep neural networks were trained to recognize subtle features from breathing data before classification, thus making manual feature extraction redundant [19, 32, 9, 17, 26].
The features for each data sample were saved in separate rows in CSV files along with the class label for each row. Thus, labeled features were prepared for the subsequent classification task. The details of the extracted handcrafted features are provided as follows.
C
$\bar{x}^{\mathsf{u}}_{k} = \sum_{m\in\mathcal{M}^{\mathsf{UE}}_{k}} (\widehat{g}^{(i)}_{km})^{*} y^{\mathsf{u}}_{m} = \sum_{m\in\mathcal{M}^{\mathsf{UE}}_{k}} \sum_{k'\in\mathcal{K}} (\widehat{g}^{(i)}_{km})^{*} g^{(i)}_{k'm} \sqrt{\rho_{k'}}\, x^{\mathsf{u}}_{k'} + \sum_{m\in\mathcal{M}^{\mathsf{UE}}_{k}} (\widehat{g}^{(i)}_{km})^{*} w^{\mathsf{u}}_{m}.$
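Numerically, this combining is an inner product of the conjugated channel estimates with the received samples over user $k$'s serving set, e.g. (names and sizes assumed):

```python
import numpy as np

def combine_uplink(g_hat, y):
    """Combine the received uplink samples y_m over user k's serving set
    with the conjugated channel estimates: sum_m conj(g_hat_km) * y_m."""
    return np.vdot(g_hat, y)   # np.vdot conjugates its first argument

# Example with a 4-element serving set:
rng = np.random.default_rng(0)
g_hat = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
y = g_hat + 0.1 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
x_bar = combine_uplink(g_hat, y)
```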
Based on (5) and the formulation in [18], the effective uplink signal-to-interference-plus-noise ratio (SINR) of user $k$ is given by
For uplink transmission, each user $k$ transmits a data signal $x^{\mathsf{u}}_{k}$.
From Theorem 18, we conclude that learning based on our Markov games model is equivalent to performing the pilot update which minimizes the interference due to PC at each near-RT PA.
The received signal $\bar{y}^{\mathsf{d}}_{k}$ for user $k$ is then given by
A
Since $\Re(b_{k}(\mathbbm{T})) = \Re(\mathsf{E}[\mathbbm{h}_{k}^{\mathsf{H}}\mathbbm{t}_{k}])$ is linear, convexity of the reformulated SINR constraints readily follows. We omit the proof for the convexity of the objective and power constraints, since it is trivial. Finally, repeated applications of the Cauchy-Schwarz inequality prove that all the aforementioned functions are also proper functions.
is readily given by combining Lemma 3, Lemma 4, Lemma 5, and by noticing that the unique solution $\mathbbm{T}^{\prime}$ to Problem (32) is also a solution to Problem (10) (note: the converse does not hold in general).
Let $\bm{\lambda}^{\star}$ be a solution to Problem (14). Then, a solution to Problem (10) is given by any solution to
Problem (32) admits a unique solution $\mathbbm{T}^{\prime}\in\mathcal{T}$. Furthermore, strong duality holds for Problem (32), i.e., Problem (33) and Problem (32) have the same optimum, and there exist Lagrangian multipliers $(\bm{\lambda}^{\prime},\bm{\mu}^{\prime})$ solving Problem (33).
The next simple lemma can be used to relate Problem (10) to Problem (32), following a similar idea in [30, 23].
D
In this subsection, we first obtain an estimate $(\hat{A},\hat{B})$ offline from measured data of the unknown real system (2), and then synthesize a controller (2.1) with zero terminal matrix $P=0$. This is the classical receding-horizon LQ controller [10].
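A minimal sketch of such a receding-horizon LQ synthesis on the estimated model; the horizon length, cost weights, and example matrices are illustrative assumptions:

```python
import numpy as np

def rhc_gain(A, B, Q, R, N):
    """First feedback gain of the finite-horizon LQ problem with zero
    terminal matrix P = 0, via the backward Riccati recursion."""
    P = np.zeros_like(A)
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K  # apply u_t = -K x_t, then re-solve at the next step

# Example with an estimated double-integrator model (A_hat, B_hat):
A_hat = np.array([[1.0, 0.1], [0.0, 1.0]])
B_hat = np.array([[0.0], [0.1]])
K0 = rhc_gain(A_hat, B_hat, Q=np.eye(2), R=np.eye(1), N=20)
```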
There are many recent studies on linear system identification and its finite-sample error bounds [22, 23, 18].
In this work, the obtained bounds hold regardless of whether $\varepsilon_{\mathrm{m}}$ and $\varepsilon_{\mathrm{p}}$ are coupled. The presence of coupling, e.g., $\varepsilon_{\mathrm{p}}=h(\varepsilon_{\mathrm{m}})$ for some function $h$, can be easily incorporated by plugging the function $h$ into the bound $g$. In addition, the error bounds obtained will also depend on the system matrices $A_{\star}$ and $B_{\star}$. To simplify the algebraic expressions of the bounds, we upper bound the system matrices as
In this work, the true model $(A_{\star},B_{\star})$ is unknown, and we only have access to an approximate model $(\hat{A},\hat{B})$ that differs from the true model with an error: $\|\hat{A}-A_{\star}\|\leqslant\varepsilon_{\mathrm{m}}$ and $\|\hat{B}-B_{\star}\|\leqslant\varepsilon_{\mathrm{m}}$ for some $\varepsilon_{\mathrm{m}}\geqslant 0$. This approximate model and its error bound can be obtained, e.g., from recent advances in linear system identification [22, 23].
where the regret is linear in T𝑇Titalic_T. This observation matches the result in [34], where the regret of a linear unconstrained RHC controller, with a fixed prediction horizon and an exact system model, is linear in T𝑇Titalic_T. This linear regret is caused by the fact that even if the model is perfectly identified, the RHC controller still deviates from the optimal LQR controller due to its finite prediction horizon.
A
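The row above outlines a certainty-equivalence pipeline: identify $(\hat{A},\hat{B})$ from data, then apply a finite-horizon LQ controller with zero terminal matrix. Below is a minimal sketch of that pipeline; the least-squares identifier, the cost matrices `Q` and `R`, the horizon `N`, and the toy system are all illustrative assumptions, not values from the excerpt.

```python
import numpy as np

def identify(X, U):
    """Least-squares estimate (A_hat, B_hat) from data x_{t+1} = A x_t + B u_t + w_t.
    X: (T+1, n) state trajectory, U: (T, m) input trajectory."""
    Z = np.hstack([X[:-1], U])                 # regressors [x_t, u_t]
    Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
    n = X.shape[1]
    return Theta[:n].T, Theta[n:].T            # A_hat, B_hat

def rhc_gain(A, B, Q, R, N):
    """First-step feedback gain of a horizon-N LQ controller with zero
    terminal matrix P = 0, via the backward Riccati recursion."""
    P = np.zeros_like(Q)                       # zero terminal cost
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K                                   # apply u_t = -K x_t, then re-plan

# Usage with a hypothetical 2-state toy system:
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2], [0.0, 0.8]]); B_true = np.array([[0.0], [1.0]])
X = np.zeros((201, 2)); U = rng.normal(size=(200, 1))
for t in range(200):
    X[t + 1] = A_true @ X[t] + B_true @ U[t] + 0.01 * rng.normal(size=2)
A_hat, B_hat = identify(X, U)
K = rhc_gain(A_hat, B_hat, Q=np.eye(2), R=np.eye(1), N=10)
```

Note that even if $(\hat{A},\hat{B})$ matched $(A_{\star},B_{\star})$ exactly, the gain `K` would still differ from the infinite-horizon LQR gain because of the finite horizon `N`, which is the mechanism behind the linear-in-$T$ regret the row mentions.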
Remark. Since the policy (7) is conditioned on a partial observation $\bm{o}_{k}$ of the state $\bm{s}_{k}$, the stationary MDP we have defined in this section is, in fact, a partially observable MDP (POMDP). In this case, it is known that the globally optimal policy depends on a summary of the history of past observations and actions, $\bm{h}_{k}=\{\bm{o}_{1},\bm{a}_{1},\dots,\bm{o}_{k}\}$, rather than just the current observation $\bm{o}_{k}$ (Kaelbling et al., 1998). However, policies formulated based on an incomplete summary of $\bm{h}_{k}$ are common in practice and still achieve good results (Sutton & Barto, 2018). We therefore pursue this approach in the present paper, and leave for future work testing the generalization of our policy input to a more complete summary of $\bm{h}_{k}$. We also note that policy gradient methods, which PPO belongs to, do not require the Markov property of the state (that is, conditional independence of future states on past states given the present state) and can therefore be readily applied to the POMDP setting. For our problem, this guarantees that the PPO algorithm will converge to a locally optimal policy.
In this paper, we have introduced the reinforcement learning reduced-order estimator (RL-ROE), a new state estimation methodology for parametric PDEs. Our approach relies on the construction of a computationally inexpensive reduced-order model (ROM) to approximate the dynamics of the system. The novelty of our contribution lies in the design, based on this ROM, of a reduced-order estimator (ROE) in which the filter correction term is given by a nonlinear stochastic policy trained offline through reinforcement learning. We introduce a trick to translate the time-dependent trajectory tracking problem in the offline training phase to an equivalent stationary MDP, enabling the use of off-the-shelf RL algorithms. We demonstrate using simulations of the Burgers and Navier-Stokes equations that in the limit of very few sensors, the trained RL-ROE vastly outperforms a Kalman filter designed using the same ROM, which is attributable to the nonlinearity of its policy (see Appendix I for a quantification of this nonlinearity). Finally, the RL-ROE also yields accurate high-dimensional state estimates for ground-truth trajectories corresponding to various parameter values without direct knowledge of the latter.
The RL-ROE exhibits robust performance across the entire parameter range $\mu\in[0,1]$, including when estimating trajectories corresponding to previously unseen parameter values. Finally, Figure 4 (right) displays the average over time and over $\mu$ of the normalized $L_{2}$ error for a varying number $p$ of sensors. Note that each value of $p$ corresponds to a separately trained RL-ROE. As the number of sensors increases, the KF-ROE performs better and better until its accuracy overtakes that of the RL-ROE. We hypothesize that the accuracy of the RL-ROE is limited by the inability of the RL training process to find an optimal policy, due both to the non-convexity of the optimization landscape and to shortcomings inherent in current deep RL algorithms. That being said, the strength of the nonlinear policy of the RL-ROE becomes very clear in the very sparse sensing regime: its performance remains remarkably robust as the number of sensors is reduced to 2 or even 1. In Appendix F, spatio-temporal contours (similar to those in Figure 3) of the ground-truth solution and the corresponding estimates for $p=2$ and $12$ illustrate that the slight advantage held by the KF-ROE for $p=12$ is reversed into clear superiority of the RL-ROE for $p=4$.
A big challenge is that ROMs provide a simplified and imperfect description of the dynamics, which negatively affects the performance of the state estimator. One potential solution is to improve the accuracy of the ROM through the inclusion of additional closure terms (Ahmed et al., 2021). In this paper, we leave the ROM untouched and instead propose a new design paradigm for the estimator itself, which we call a reinforcement-learning reduced-order estimator (RL-ROE). The RL-ROE is constructed from the ROM in an analogous way to a Kalman filter, with the crucial difference that the linear filter gain function, which takes in the current measurement data, is replaced by a nonlinear policy trained through reinforcement learning (RL). The flexibility of the nonlinear policy, parameterized by a neural network, enables the RL-ROE to compensate for errors of the ROM while still taking advantage of the imperfect knowledge of the dynamics. Indeed, we show that in the limit of sparse measurements, the trained RL-ROE outperforms a Kalman filter designed using the same ROM and displays robust estimation performance across different dynamical regimes. To our knowledge, the RL-ROE is the first application of RL to state estimation of parametric PDEs.
We evaluate the state estimation performance of the RL-ROE for systems governed by the Burgers equation and Navier-Stokes equations. For each system, we first compute various solution trajectories corresponding to different physical parameter values, which we use to construct the ROM and train the RL-ROE following the procedure outlined in Section 2.4. The trained RL-ROE is finally deployed online and compared against a time-dependent Kalman filter constructed from the same ROM, which we refer to as KF-ROE. The KF-ROE is given by equations (4a) and (5), with the calculation of the time-varying Kalman gain detailed in Appendix C of the supplementary materials (a schematic one-step update for both estimators is sketched after this row).
D
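The rows above contrast a ROM-based Kalman filter (KF-ROE) with the RL-ROE, whose linear gain is replaced by a learned nonlinear policy. Since equations (4a) and (5) are not reproduced in this excerpt, the sketch below only illustrates the structural difference under standard linear-Gaussian assumptions; the ROM matrices and the policy interface `pi` are hypothetical stand-ins.

```python
import numpy as np

def kf_roe_step(z, P, y, Ar, Cr, Qn, Rn):
    """One predict/correct step of a time-varying Kalman filter built on a
    linear ROM z_{k+1} = Ar z_k (schematic; the paper's equations (4a) and
    (5) are not reproduced in this excerpt)."""
    z_pred = Ar @ z                         # ROM prediction
    P_pred = Ar @ P @ Ar.T + Qn
    S = Cr @ P_pred @ Cr.T + Rn
    K = P_pred @ Cr.T @ np.linalg.inv(S)    # time-varying Kalman gain
    z_new = z_pred + K @ (y - Cr @ z_pred)  # linear correction
    P_new = (np.eye(len(z)) - K @ Cr) @ P_pred
    return z_new, P_new

def rl_roe_step(z, y, Ar, Cr, pi):
    """Same structure, but the gain-times-innovation term is replaced by a
    nonlinear correction from a trained policy `pi` (a stand-in for the
    neural network; its exact inputs are an assumption here)."""
    z_pred = Ar @ z
    a = pi(y - Cr @ z_pred, z_pred)         # learned nonlinear correction
    return z_pred + a
```

The nonlinear map `pi` can weight the innovation in a state-dependent way, which is one intuition for the robustness the rows report when only one or two sensors are available.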
The massive presence of networked systems in many areas is making distributed optimization more and more attractive for a wide range of tasks.
convergence of the network systems to a steady-state configuration corresponding to a stationary point of the problem.
These tasks often involve dynamical systems (e.g., teams of robots or electric grids) that need to be controlled while optimizing a cost index.
The massive presence of networked systems in many areas is making distributed optimization more and more attractive for a wide range of tasks.
In [19], algebraic systems are controlled by relying on gradient information affected by random errors. As for feedback optimization in multi-agent systems, the early reference [20] proposes an approach based on saddle point flows, while [21] addresses a partition-based scenario.
B