context | A | B | C | D | label
---|---|---|---|---|---
Treatment was considered only for agents for whom treatment had not previously failed, given that the probability of treatment failure with DAAs is low [Chugh et al., 2019]. DAAs are the current standard of care in India, as in much of the rest of the world. Chugh et al. [2019] estimated real-world cure or sustained virologic response (SVR) rates for DAAs from their experience of treating HCV patients in Indian Punjab. The authors found that the pan-genotypic combination of the DAAs sofosbuvir (SOF) and velpatasvir (VEL) yielded the highest SVR rates, and hence we used SOF+VEL as the standard of care in our simulation experiments, with genotype-specific SVR rates (ranging from 84% to 86%) sourced from Chugh et al. [2019].
|
In this study, we evaluate multiple treatment models, i.e., approaches to the treatment and management of HCV. These involve varying the timing and frequency of treatment - for example, running treatment camps once every 3 years, or at the beginning of the intervention period versus the end. Each of these alternate treatment models is compared against the default model, which represents the status quo and is referred to henceforth as the annual-treatment model. Under this model, an increasing proportion of infected agents is treated every year (hence the name). It represents the status quo in the following way: based on inputs from our clinical collaborator, who has field experience in managing HCV treatment in a high-prevalence area, we assume that public awareness regarding HCV increases every year, starting from a base value of the proportion of the infected cohort who receive treatment (capturing the impact of initial awareness regarding HCV). The number of infected agents who receive treatment under the default annual-treatment model over the intervention period is controlled by the target uptake rate. For example, if the target uptake rate is set to 10%, then our implementation of annual treatment attempts to ensure that 10% of all infected agents present from the beginning of the intervention period (including those infected during the calibration period) to its end are treated.
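As a toy illustration of this target-uptake bookkeeping, the Python sketch below tracks how many agents would be treated each year against a 10% target; the yearly update rule and all numbers are our own illustrative assumptions, not the simulator's actual implementation.

```python
# Toy sketch: treat enough infected agents each year so that cumulative
# treatments approach the target fraction of all agents ever infected.
def agents_to_treat_this_year(total_ever_infected, treated_so_far, target_uptake=0.10):
    target_total = target_uptake * total_ever_infected
    return max(0, round(target_total - treated_so_far))

treated = 0
for year, ever_infected in enumerate([1000, 1100, 1250], start=1):
    n = agents_to_treat_this_year(ever_infected, treated)
    treated += n
    print(f"year {year}: treat {n}, cumulative {treated}")
```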
|
We now discuss outcomes that quantify the extent to which incorporating transmissions impacts the cost-effectiveness of HCV treatment. These outcomes were obtained by stopping transmissions during the intervention period of a given simulation run. This involved modifying the values of the transmission-related parameters - for example, the probability of an IDU influencing a non-IDU into becoming an IDU is set to zero. All other simulation settings are retained, and the outcomes from this simulation experiment are depicted in Figure 3. In Figure 3, we plot the ratios of (a) the NMBs from the WT to the WoT analyses, (b) the costs from the WoT to the WT analyses (for ease of comparison with the other ratio plots), and (c) the QALY terms from the NMB equation for the WT and the WoT cases. It is clear that in all cases, the ratios converge to 1 as the treatment uptake rate increases. The numerical outcomes corresponding to Figure 3 are provided in Table 5 below.
|
We now discuss how treatment is applied to the agent cohort in the intervention period. First, we recall that the simulation execution period is divided into two stages. The first is a calibration period of 50 years of simulation time, during which the demographics of the agent cohort and the spread of the HCV infection reach levels corresponding to the validation targets (published real-world observations of HCV prevalence). A 10-year intervention period then begins, during which treatment using DAAs is applied to a subset of infected agents and outcomes pertaining to HCV epidemiology and its health and cost impacts are recorded.
|
In addition, a dynamic agent cohort is incorporated, meaning agents enter and exit the model via birth and death during the simulation execution period. The simulation execution period was divided into two parts: the first 50 years of simulation time, called the calibration period, followed by 10 years during which the intervention (different treatment models with DAAs) was introduced and outcomes of interest were collected for analysis. The calibration period ensured that, at the point in the simulation execution period when collection of outcomes of interest begins, the demographics of the agent cohort and the spread of HCV in terms of its prevalence matched those documented in the literature relevant for validating the model. A 10-year intervention period was chosen as a time scale sufficient to capture the impacts of different interventions at a population level. Further, daily time points were used so that interactions relevant to the spread of the disease could be captured with the appropriate granularity.
|
C
|
$$G_{id}(s)=\frac{(1-D_c)V_o-sLI_L}{LCs^{2}+(L/R)s+(1-D_c)^{2}}$$
|
A similar approach is followed for the design of the outer voltage loop. The closed-loop transfer function (CLTF) of the inner current loop in (3) is used to calculate the open-loop gain Tv(s) for the outer voltage loop, shown in Fig. 4, where (4) represents the plant transfer function relating output voltage to inductor current for the boost converter.
|
Dc is the duty cycle and Rs is the sum of the inductor ESR and the source resistance. Using (1) as the plant model, the Bode plot method applied to the loop gain in (2) is used to obtain the PI gains of the current controller. The Bode plot for the open-loop gain of the current control loop is shown in Fig. 3, where we can see that the chosen current loop gains result in a phase margin (PM) of 90 degrees and a bandwidth of 20 kHz. The 20 kHz bandwidth for the current controller is selected in the model based on an estimate from the actual CE+T converter, as described in Subsection A of Section III.
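To make the workflow concrete, here is a minimal Python sketch of this Bode-based check using the plant in (1); the component values and PI gains are placeholder assumptions, not the CE+T parameters.

```python
# Minimal sketch: build G_id(s), close a PI loop around it, and read off the
# approximate crossover frequency and phase margin from the Bode data.
import numpy as np
from scipy import signal

L, C, R = 250e-6, 470e-6, 10.0    # assumed inductor, capacitor, load values
Dc, Vo, IL = 0.5, 350.0, 2.0      # assumed operating point

# G_id(s) = ((1-Dc)*Vo - s*L*IL) / (L*C*s^2 + (L/R)*s + (1-Dc)^2)
num = [-L * IL, (1 - Dc) * Vo]
den = [L * C, L / R, (1 - Dc) ** 2]

Kp, Ki = 0.05, 200.0              # assumed PI gains to be tuned
loop = signal.TransferFunction(np.polymul([Kp, Ki], num),
                               np.polymul([1, 0], den))

w, mag, phase = signal.bode(loop, w=np.logspace(1, 6, 2000))
idx = np.argmin(np.abs(mag))      # sample closest to the 0 dB crossover
print(f"crossover ~ {w[idx] / (2 * np.pi):.0f} Hz, PM ~ {180 + phase[idx]:.0f} deg")
```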
|
For an approximate switching frequency of 30 kHz (verified from the CE+T), the CE+T's current control loop has a bandwidth of 20 kHz. Step responses of the current controller from the time-domain simulation and from the CE+T are shown in Fig. 8 and Fig. 9, respectively, verifying the modeling of the current control loop. The bandwidth is estimated from the step response by modeling the current control loop as a first-order system and extracting its time constant.
|
The chosen COTS converter is a multi-port (1x AC, 2x DC) converter, which can be controlled in different ways. As we are interested in the droop behavior, its modeling is simplified for the case when the COTS converter is used as a droop-controlled DC bus. In this study, its other DC port is ignored, and the input AC-side dynamics are also ignored. A DC/DC boost converter with dual-loop control regulating to a nominal voltage of 350 V is chosen as an equivalent switching model of the CE+T converter; parameters are given in Table 1. Any other switching-converter topology could be chosen for the equivalent model, because switching and converter dynamics can be ignored when characterizing the control behavior of the CE+T converter under the assumption that the fastest control loop is decoupled from the switching harmonics (even an average model suffices for control design and evaluation). The individual design of the control loops is as follows, and Bode plots are used to verify the control design.
|
B
|
Following the European standard EN50388-2 for AC traction systems, the responsibility to maintain system stability is shared between the converter manufacturers and the infrastructure operator in an organizationally tractable manner through compliance with two rules [14]. Similarly, in DC systems, it is desirable to design the converter control and the overall system integration in such a way that the passivity of both the converters and the system is ensured.
|
The grid is not allowed to have weakly-damped passive resonances below the frequency threshold, because converters are allowed to be non-passive there.
|
Equivalent converter impedances are strictly passive above a frequency threshold (87 Hz in 16.7 Hz railway systems; 300 Hz in 50 Hz applications).
|
Here, “strictly passive” means that the complex converter impedance has a positive real part at all frequencies above the threshold.
|
Together, the above two requirements from EN50388-2 guarantee system stability and can be used as converter and system design guidelines without knowledge of the complete system, because parallel-connected systems preserve passivity in the respective frequency range. Moreover, as more and more converters that are passive with a certain margin are added to the system, the aggregated behavior of all converters remains passive. The concept of ensuring converter passivity and system passivity for AC traction system stability can also be applied to DC microgrid stability. The 300 Hz frequency threshold may not hold in a DC microgrid and should instead be derived from rigorous analysis.
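As an illustration, the impedance-passivity rule can be checked numerically on sampled data; in the sketch below the impedance model, frequency grid, and the 300 Hz default threshold are illustrative assumptions.

```python
# Check an EN50388-2-style passivity criterion on sampled impedance data:
# Re{Z(j2*pi*f)} must stay positive for all frequencies above the threshold.
import numpy as np

def is_strictly_passive(f, Z, f_threshold=300.0, margin=0.0):
    above = f > f_threshold
    return bool(np.all(Z[above].real > margin))

f = np.logspace(0, 4, 500)                       # 1 Hz .. 10 kHz grid
w = 2 * np.pi * f
Z = 0.1 + 1j * w * 1e-4 + 1.0 / (1j * w * 1e-3)  # synthetic R + jwL + 1/(jwC)
print(is_strictly_passive(f, Z))                 # real part is 0.1 > 0 -> True
```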
|
B
|
Key Events After the Emotion_* event, a KEY_* event is appended to indicate the key property. A total of 24 types (12 tonic notes with two modes each) are used in this work.
|
The proposed functional representation is designed based on REMI [1], a widely used event (token) based representation for symbolic music, but with different note and chord events that help model the emotion and key information better. See Fig. 2 for illustrations.
|
Bar, Sub-Beat and EOS Events As in REMI, a BAR event is used when a new bar begins, a SUB-BEAT_* event points to one of 16 possible discrete locations within a bar, and an EOS event ends the whole lead sheet.
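A minimal sketch of how such an event sequence could be assembled is given below; the token spellings and the toy bar content are our assumptions, not the paper's exact vocabulary.

```python
# Assemble a REMI-style token sequence: emotion, key, then bars of
# sub-beat-positioned note/chord events, closed by EOS.
EMOTIONS = ["EMOTION_NONE"] + [f"EMOTION_Q{i}" for i in range(1, 5)]
KEYS = [f"KEY_{t}_{m}" for t in ["C", "C#", "D", "D#", "E", "F",
                                 "F#", "G", "G#", "A", "A#", "B"]
        for m in ["MAJOR", "MINOR"]]        # 12 tonics x 2 modes = 24 key events

def encode_lead_sheet(emotion, key, bars):
    """bars: list of bars; each bar is a list of (sub_beat, event) tuples."""
    assert emotion in EMOTIONS and key in KEYS
    tokens = [emotion, key]                 # KEY_* follows the EMOTION_* event
    for bar in bars:
        tokens.append("BAR")                # a BAR event opens each new bar
        for sub_beat, event in bar:
            assert 0 <= sub_beat < 16       # 16 discrete positions per bar
            tokens += [f"SUB-BEAT_{sub_beat}", event]
    return tokens + ["EOS"]                 # EOS ends the whole lead sheet

print(encode_lead_sheet("EMOTION_Q1", "KEY_C_MAJOR",
                        [[(0, "CHORD_C_MAJ"), (0, "NOTE_E4"), (8, "NOTE_G4")]]))
```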
|
As the dataset with emotion labels is not large enough, we pretrain the model on a large lead sheet dataset without emotion annotations to establish a robust understanding of the relationship between melody and chord, using EMOTION_NONE as the emotion event. We then finetune the model on EMOPIA [5].
|
B
|
As both (6a) and (6c) are non-convex functions, the GA algorithm can converge to a local optimum; in an attempt to mitigate this issue, we use the IPM. The update direction in each IPM step is calculated by considering both the Hessian and the gradient of (6a) [26], and hence a better update direction is chosen. We use Matlab’s fmincon [27] to find a sub-optimal WD combiner using the IPM with approximate Hessian and gradient. We emphasize that using the IPM does not eliminate the possibility of converging to a local optimum.
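For readers without MATLAB, a rough Python analogue of this step is sketched below, with scipy's 'trust-constr' solver standing in for fmincon's interior-point algorithm; the objective is a stand-in, not the paper's actual sum-rate expression.

```python
# IPM-style local optimization of a nonconvex stand-in objective over phases.
import numpy as np
from scipy.optimize import Bounds, minimize

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 8))              # stand-in channel matrix

def neg_objective(theta):
    w = np.exp(1j * theta)                   # unit-modulus weights exp(j*theta)
    return -np.log2(1.0 + np.abs(w.conj() @ H @ w) ** 2)

res = minimize(neg_objective, rng.uniform(0, 2 * np.pi, 8),
               method="trust-constr", bounds=Bounds(0.0, 2 * np.pi))
print(-res.fun)   # may be a local optimum: IPM gives no global guarantee
```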
|
Our simulation results show that, in both Rayleigh fading and realistic 3GPP channels, the performance of our proposed GA-based algorithm is comparable with that of the IPM, and both can improve the performance of SIM antennas relative to a DPA with an equal number of RF chains in single-user and multi-user scenarios. Under an equal aperture size, however, the DPA outperforms the SIM antenna.
|
Algorithm Efficiency: We begin by studying the efficiency of the proposed GA algorithm. Fig. 2(a) plots the results averaged over 1000 runs for 10 randomly selected Rayleigh fading channel realizations. As shown, in all cases the major improvement occurs during the first 200 iterations, and all cases converge to at least a local optimum. Fig. 2(b) plots the achievable sum-rate of the SIM using the GA and IPM algorithms for different numbers of layers. As shown, as the number of layers increases, the achievable sum-rate of both the GA and IPM algorithms first improves and eventually saturates or drops. As the number of layers increases, the number of variables increases as well, which raises the possibility that the GA algorithm converges to a poor local optimum. The IPM, using the Hessian of the objective function, can find a better update direction toward the global optimum and escape smaller local optima. Also, when L increases, the interlayer distance decreases, which reduces the achievable sum-rate through poor interconnectivity, since the radiation pattern of each unit-cell is equivalent to that of a dipole antenna and is not isotropic. Furthermore, more RMTS layers in the SIM structure lead to higher losses in the system due to the insertion loss of each layer. Therefore, there is a sweet spot for L that yields the best SIM performance.
|
In this section, we document the effectiveness of the proposed GA algorithm and the IPM in calculating the WD combiner for the SIM antenna in single-user and multi-user uplink scenarios where the BS is in the far-field of the user(s). As upper and lower performance limits, we also report the achievable sum-rate of a DPA with a physical aperture area equal to that of the SIM antenna, i.e., the same average user signal power reaching both antennas, and of a DPA with an equal number of RF chains, i.e., equal power consumption (we ignore the power consumption of the varactor diodes and the driving circuitry of each RMTS unit-cell as comparatively negligible).
|
M = K = 1 RF chains is reported, as is the performance of the SIM with a simple MF combiner. We set α to 1.8 in the GA algorithm. As shown, using the IPM and GA algorithms, we gain a significant performance improvement over a simple MF combiner. Interestingly, the GA provides a higher sum-rate than the IPM in both Rayleigh fading and realistic 3GPP channels. In this case, it appears that the flexibility in choosing the learning rate in each iteration helped the GA algorithm in single-user cases to skip local optima and find a closer-to-optimum combiner than the IPM. Also, the achievable sum-rate of the SIM antenna is lower than that of the DPA using a maximum ratio combiner due to higher losses in the system (in comparison with the DPA under the equal aperture size constraint). However, when the number of RF chains is made equal, putting a SIM in front of a single antenna improves the uplink sum-rate of the user. Here, the SIM acts as a lens that focuses the signal power received from the user onto its own receiver antenna.
|
C
|
Meanwhile, nonlinear devices can be either passive or active. IMD products of passive devices are called Passive Intermodulation (PIM). PIM sources include weak mechanical connections in the TX chain, kinks and sharp edges in conductors, duplexer filters or ferrite fillers, switches, metal oxide layers covering conductors and junctions, and dirt in connectors. Passive nonlinearity is mainly caused by nonlinear conductive and magnetic properties of devices inside the transceiver chain (internal PIM), or outside the system in the near-field [12] or transition region of the antenna array (external PIM, air-induced PIM) due to metal fences or billboards near the antenna array [4].
|
PIM represents one of the major interference problems [13, 14, 15] in modern radio systems for service providers and equipment suppliers. PIM interference results in decreased coverage of BS cells, a decrease in the sensitivity of RX uplink (UL) signals, or, possibly, a completely inoperable transmission link.
|
Enhanced Mobile Broadband (EMBB) is a service defined by the 3rd Generation Partnership Project for 4G Long-Term Evolution (LTE) and 5G New Radio (NR) deployment to provide higher data rates for the end user [1]. To achieve this, EMBB utilizes technologies such as Multiple Input Multiple Output (MIMO) [2, 3], Orthogonal Frequency Division Multiplexing (OFDM), and Carrier Aggregation (CA) [4]. MIMO provides spatial signal diversity, OFDM provides frequency-domain expansion, and CA allows flexible spectral resource allocation between different component carriers (CCs) of transmitted data [5]. In addition, LTE and NR specifications support the frequency division duplex (FDD) regime, where the transmitter (TX) and receiver (RX) operate simultaneously, occupying different frequency bands [6]. However, real base station (BS) hardware is non-ideal and has nonlinear properties [7, 8, 9, 10]. This is especially noticeable when non-contiguously aggregated downlink (DL) signals pass through shared nonlinearities and intermodulation products are generated [11]. Some of these products may fall into the RX band of the FDD system. All FDD transceivers have a duplexer between the TX and RX chains, which protects the RX chain from intermodulation at the same frequency as the TX CCs. However, IMD products of CC interaction may still fall into the RX band. Additionally, these products, and products at other frequencies, may affect surrounding systems working in frequency/time division duplexing modes as external sources of interference.
|
Despite the wide variety of PIM compensation methods [4, 16, 17, 18, 19], modern approaches do not allow one to simulate external PIM in a MIMO system physically. All known papers use a compensation-model approach, in which the compensation model follows the physical mechanism of external PIM generation with significant simplifications. These research results are therefore limited by the availability of real measurement data, and such measurements are not accessible to many researchers. Moreover, none of these methods provides a comprehensive process by which artificial interference caused by an external PIM source could be simulated directly, especially in an arbitrary MIMO system.
|
B
|
LLM.¹ ¹Replacing sentences 2–3 in prompt 4 from Table 2 with “You will be presented with an ASR transcription in json format with keys: text and low_confidence_words, where the text is the ASR transcription and low_confidence_words contains the list of words in the transcription with low confidence scores.
|
We summarize the results in terms of WER and CER for the original ASR and the LLM-corrected transcripts (relative
|
We then filter the ASR outputs that should be passed to the LLM based on the sentence-level or the lowest word-level confidence score in the
|
If you come across errors in ASR transcription, make sure that you correct only words from within the low_confidence_words list and your corrections should closely match the original transcription acoustically or phonetically.”
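A short sketch of this confidence-based filtering and of the JSON payload described in the prompt is given below; the field names follow the prompt, while the thresholds and helper functions are our own assumptions.

```python
# Build the JSON payload for the LLM and decide which utterances need it.
import json

def build_llm_input(transcript, word_confidences, word_threshold=0.5):
    low = [w for w, c in word_confidences if c < word_threshold]
    return json.dumps({"text": transcript, "low_confidence_words": low})

def needs_correction(sentence_conf, word_confidences, s_thr=0.8, w_thr=0.5):
    # Pass an utterance to the LLM only if its sentence-level confidence or
    # its lowest word-level confidence falls below the respective threshold.
    return sentence_conf < s_thr or min(c for _, c in word_confidences) < w_thr

words = [("the", 0.99), ("quick", 0.42), ("fox", 0.95)]
if needs_correction(0.90, words):
    print(build_llm_input("the quick fox", words))
```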
|
D
|
Moreover, current prototypes use solar power, further emphasizing that the bottleneck is downloading data from the microSD cards to free space on-board.
|
This description of the limitations of using SongBeam recorders to monitor corn buntings is likely to apply to many biomonitoring scenarios: solar power may allow for extending deployment lifetime in remote locations, but storage of recordings is a limiting factor.
|
SongBeam [40] microcontroller-based recorders have been used to monitor corn bunting (Emberiza calandra) birds in Oxfordshire, UK.
|
Oxfordshire Corn Buntings. This project-specific library contains recordings of corn buntings along a transect of approximately 20 km in Southern England. Corn buntings sing in a mosaic-like pattern of geographical variation called dialects; our sample contains approximately 6 different dialects. The recordings were performed with directional parabolic microphones as described in [40].
|
The typical data pipeline in bio-acoustic research requires the deployment of sensors, where each node is battery-powered and left in the field to record environmental sound continuously for a long period (e.g., a whole season). Thereafter, the recorded data is manually collected by researchers on-site, from each node, and further analyzed in the laboratory. In the case of avian species, for instance, only the targeted bird species is relevant within the recorded data, and the rest of the data is ultimately discarded. This is both a cumbersome process and a waste of resources, not only at the downstream stage in the lab, but also upstream in the field, on each node, where large amounts of irrelevant data are recorded. In this pipeline, continuous recording of audio thus creates a bottleneck in the memory and energy budgets available on individual sensors - typically limited to an SD card and a small battery, respectively - driven by rudimentary software running on a low-power microcontroller. Such hardware is very energy-efficient but very limited in memory resources, with RAM budgets on the order of 500 kilobytes [1], which imposes specific constraints on software embedded on such devices [2].
|
A
|
The powerful SiST system CLASI can be applied to various scenarios to facilitate cross-lingual communication. For example, it can be deployed at conferences or daily meetings to help listeners understand speech in different languages. It can also be deployed as a system-level translation module to help users watch videos in different languages. For online gaming, it can help bridge the gap in cross-lingual communication and connect people speaking different languages. A powerful SiST system with human-parity performance may also significantly improve the efficiency of professional human interpreters.
|
Despite the huge positive social impact that CLASI may bring, every coin has two sides: neglecting some low-resource languages may introduce unfairness for some minorities. Resolving these problems requires further cooperation from society. We leave support for more languages as future work.
|
A
|
The need for stable and reliable control of Inverter-Based Resources (IBRs) is immense with the increasing penetration of renewable energy sources like solar and wind. Traditional sources, consisting of large rotating machines, provide stability to the system in a unique way through the inertia of their mass. This ensures that the interconnected power system can operate reliably, allowing control systems to account for disturbances. However, this phenomenon cannot be leveraged by inverter-based sources: solar PV arrays cannot provide any mechanical inertia, and wind turbines rotate intermittently, requiring AC-DC-AC conversion or other complex control systems. All this points to the fact that the growing share of IBRs, by decreasing system inertia, may lead to issues regarding the stability and reliability of the system.
|
Over the decades, major research effort has been devoted to the field of Grid-Forming (GFM) inverters. GFM inverters act as voltage sources and can stabilize their frequency, giving them the capability to operate in isolation. Virtual Synchronous Machines (VSM) and droop control are two of the common methodologies. Both control logics imitate the physics of a real alternator, allowing an inverter to behave in a very similar fashion; these methods introduce virtual inertia to the system. While they have their strengths, emulating a complex physical system comes with challenges: it requires high computational power, is prone to convergence issues, and can introduce time delays due to the scale of the computations involved.
|
The inverters are connected to the load via an RLC filter. The values of these filters depend on the ratio of their capacities for optimal power sharing. While this may not be necessary considering VRL, it is recommended to preserve the power-sharing characteristic. To validate the inverter system, a Hardware-in-the-Loop (HIL) setup using Typhoon HIL 404, Typhoon HIL SCADA and Oscilloscope is set up as seen in Figure 6. Due to the limitations of this setup, only two inverters were modelled. The oscilloscope is set up to show grid voltage and current, while the HIL-SCADA displays various other parameters: frequency, voltage, load and control loop values.
|
Traditional inertia will no longer dominate the stability of modern grids due to the increasing penetration of renewable and intermittent sources like PV and wind. Virtual oscillator control (VOC) is an interesting approach for grid-forming inverters: it provides a quick initial response with inherent and accurate power sharing. Various architectures under the VOC domain, like VdP, DZ, and AHO, have unique characteristics that suit them to particular situations. Voltage range limits may hinder the operating range of a VOC. In this paper, virtual oscillator control is used for inverter control along with a voltage recovery loop (VRL) to overcome the restrictions due to voltage deviations. While the VRL can maintain voltage at a reference value, it requires extensive parameter tuning with an intricate reset-coordination system during anomalous events. A hysteresis-band-controlled current source inverter for a PV system is compatible with the DZ-VOC battery-inverter system. During load or generation fluctuations, the battery system acts immediately to compensate for any imbalances, provided it has the capacity. The results show an almost instantaneous response provided by VOC control. Additionally, multiple VOC-operated inverters in parallel can share the load and synchronize with one another quickly and accurately without any external control. The robustness of VOC, coupled with its numerous advantages, should see it widely used in the next generation of grid-forming inverters.
|
A
|
In addition, such distributional evaluation should be realized using a small data set because obtaining data samples from high-fidelity simulators or real-world experiments is expensive and time-consuming.
|
To assess the aforementioned risks owing to the stochasticity of systems, it is important to evaluate the distributions of robustness values. Some works [17, 20] have harnessed input-dependent sub-Gaussian noise with GPs. They, however, cannot derive the probability distribution itself. In [30, 34], the authors have employed the discretization of the target value to handle the complexity of the corresponding probability distribution. They have shown that such discretization achieves a more robust and accurate estimation than directly treating the target value.
|
In the DLGP, we reduce solving Problem 1 to estimating a posterior Dirichlet random field. The posterior parameter function of the Dirichlet random field is represented by multiple LGPs. The function is estimated to balance the reduction of overconfidence and the goodness-of-fit to the data.
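To give a flavor of why a Dirichlet posterior over discretized values quantifies estimation uncertainty, here is a toy Python sketch; the bins, prior, and data are illustrative, and the actual DLGP represents the posterior parameter function with LGPs rather than fixed counts.

```python
# Dirichlet posterior over discretized robustness values: the per-bin
# posterior variance shrinks as data accumulate, unlike a kernel-fixed bound.
import numpy as np

bins = np.linspace(-1.0, 1.0, 11)          # 10 bins over the value range
alpha0 = np.ones(10)                       # uniform Dirichlet prior
samples = np.random.default_rng(1).normal(0.2, 0.3, size=25)

counts = np.histogram(np.clip(samples, -1.0, 1.0 - 1e-9), bins=bins)[0]
alpha_post = alpha0 + counts               # Dirichlet posterior parameters

mean = alpha_post / alpha_post.sum()       # estimated bin probabilities
var = mean * (1 - mean) / (alpha_post.sum() + 1)   # per-bin posterior variance
print(mean.round(3), var.max().round(4))
```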
|
However, the uncertainty of the estimated distribution obtained from [23, 19] is determined depending on the kernel functions rather than the number of data points, which yields the possibility that the estimation is overconfident (or underconfident).
|
Quantifying the estimation uncertainty is important to identify whether the estimated distribution is reliable.
|
D
|
Damage identification based on modal analysis input: Three k-Nearest Neighbor (kNN) classifiers are developed to identify damage existence, location, and magnitude.
|
Considering the same dataset, a comparison of training time and memory cost for the stacked GRU (stack number 200) and kNN is shown in Table 9. The training time of the stacked GRU is slightly shorter than that of kNN, and the stacked GRU network requires less memory: the stacked GRU network file is only 704 KB, while the kNN network requires 40,564 KB. This difference becomes more significant as the dataset grows. Thus, the stacked GRU, rather than the kNN algorithm, is employed to identify the existence of structural damage.
|
The stacked GRU method reduces the complexity of the GRU network by using a stacked input time series. The figure illustrates the principle of optimizing the GRU network by stacking sequences: when the original time series is used as the input, more GRU cells are required to capture the temporal correlations. At each time step, the chunks of all stacks (a single value from each stack) are fed into the GRU simultaneously, and each stack is regarded as a new input feature of the network. By selecting an appropriate number of stacks, the length of the stacked sequence is shortened, reducing the number of required GRU units. The diagram of the stacked GRU network is shown in Figure 14. The choice of the number of stacks depends on the temporal correlation of the signal: for weak correlations, such as a stable periodic signal, a larger number of stacks improves training efficiency, while for strong correlations, such as a non-stationary signal, the number of stacks needs careful consideration. A discussion on the number of stacks is provided in Section 7.1.
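A minimal sketch of the stacking operation follows; the shapes, the contiguous-segment layout of the stacks, and all sizes are our illustrative assumptions.

```python
# Reshape one long signal of length n_stacks * T into T time steps with
# n_stacks features, so the GRU unrolls over far fewer steps.
import numpy as np

def stack_series(x, n_stacks):
    """x: (length,) record -> (length // n_stacks, n_stacks) stacked input."""
    T = len(x) // n_stacks
    # Column j holds stack j; at step t the GRU sees one value of every stack.
    return x[: T * n_stacks].reshape(n_stacks, T).T

x = np.sin(0.01 * np.arange(20000))        # stand-in acceleration history
X = stack_series(x, n_stacks=200)          # 200 stacks -> only 100 GRU steps
print(X.shape)                             # (100, 200)
```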
|
The damage identification strategy of the CMLDI method also utilizes short-term acceleration time histories. For different damage detection requirements, the feature extraction methodology based on signal processing is introduced, and the pertinent machine learning approach is presented in detail. Three different signal processing techniques are considered: time series stacking, Fourier transform, and wavelet transform. Even with the incorporation of noise, the combined approach proves effective. The time-stacked features combined with the stacked GRU algorithm demonstrate strong performance in identifying the existence of damage; compared to the kNN algorithm, this requires less training time and storage memory. However, the information on the magnitude of damage contained in the time-stacked sequences is insufficient, making it challenging to accurately identify the severity of the damage. In contrast, kNN shows higher accuracy in determining the damage magnitude by processing frequency sequences. Both the stacked GRU and kNN show limited effectiveness in identifying the location of damage. Since the time series and frequency sequences are not particularly sensitive to damage location, the wavelet transform is used to generate time-frequency images of the acceleration signals. By converting the horizontal axis of these images to distance, the acceleration signals are normalized for different train speeds. A CNN method is then applied to classify the distance-frequency images, achieving 100% accuracy in damage location identification.
|
Damage existence identification: An extremely long temporal sequence is shortened by the stacking approach. The stacked time series is fed into the stacked GRU network to determine damage existence.
|
D
|
$\hat{D}_{n}(\{\mathbf{R}_{n},\tilde{\mathbf{R}}_{n}\}_{n=1}^{N},\bar{\mathbf{Q}}_{n})\leq\bar{D}_{n},\ \forall n,$
|
Because Problem (38) is a convex problem, we can utilize the interior-point method to solve it globally. Because of (35), the solution to Problem (38) is a feasible solution to Problem (P2.1).
|
Next, we formulate a joint transmission and compression optimization problem, which minimizes the PCRB subject to each BS’s transmit power constraint and fronthaul capacity constraint. The formulated problem is non-convex and quite challenging to solve, due to the coupling of the transmit covariance matrices and the compression noise covariance matrices. To deal with these issues, we first propose an alternating optimization (AO)-based method to separately optimize the transmit covariance matrices and the compression noise covariance matrices. Moreover, we show that when either type of covariance matrix is fixed, we can apply the successive convex approximation (SCA) technique to optimize the other type to a local optimum.
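The AO structure can be summarized in a few lines of Python; solve_R and solve_Q below are placeholders for the SCA subproblems and are replaced by dummy quadratic updates purely to exercise the loop.

```python
# Generic AO loop: alternately optimize each block with the other fixed.
def alternating_optimization(R, Q, solve_R, solve_Q, n_iters=50, tol=1e-6):
    prev = float("inf")
    for _ in range(n_iters):
        R, _ = solve_R(R, Q)       # SCA step over transmit covariances
        Q, obj = solve_Q(R, Q)     # SCA step over compression covariances
        if prev - obj < tol:       # objective is nonincreasing across steps
            break
        prev = obj
    return R, Q

# Dummy scalar subproblems (placeholders for the convexified problems):
solve_R = lambda R, Q: (0.5 * (R + Q), (R - Q) ** 2)
solve_Q = lambda R, Q: (0.5 * (R + Q), (R - Q) ** 2)
print(alternating_optimization(1.0, 0.0, solve_R, solve_Q))
```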
|
Note that in Problem (38), the non-convex constraint (32c) in Problem (P2.1) is replaced by the convex constraint (38b).
|
Therefore, Problem (43) is a convex problem and can be solved by the interior-point method. The SCA-based algorithm to solve Problem (42) is then summarized in Algorithm II. After the solution to Problem (42) is obtained, the compression noise covariance matrices can be obtained via (40).
|
C
|
Recently, the advent of Kolmogorov-Arnold Networks (KANs) has aimed to demystify the opaque nature of traditional neural network designs, offering enhanced interpretability and showcasing the promise of transparent AI research [19, 20].
|
In this paper, we propose Path-SAM2, which introduces UNI, the largest pre-trained model in the pathology field, and adds a KAN classification module to replace manual dot prompts, achieving pathology semantic segmentation based on SAM2. We have validated it on three public pathology datasets, and Path-SAM2 achieves the best segmentation results in terms of the DSC (Dice Similarity Coefficient) and IoU (Intersection over Union) metrics compared to the baseline models. Our work confirms the potential of SAM2 for the semantic segmentation of pathological images.
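For reference, the two reported metrics can be computed from binary masks as follows; the masks here are toy stand-ins and the actual evaluation protocol may differ.

```python
# Dice similarity coefficient (DSC) and intersection-over-union (IoU).
import numpy as np

def dice_iou(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (np.logical_or(pred, gt).sum() + eps)
    return dice, iou

pred = np.zeros((64, 64))
pred[10:40, 10:40] = 1
gt = np.zeros((64, 64))
gt[15:45, 15:45] = 1
print(dice_iou(pred, gt))
```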
|
The fine-tuning of SAM and SAM2 does not use the UNI encoder; rather, we incorporate our KAN classification module into the original SAM2 encoder and decoder for training.
|
Regarding the overall architecture of Path-SAM2, we have already discussed the comparison of the encoder part in the previous section. In this section, we discuss the impact of the KAN classification module on the model’s performance, and we will compare it with the MLP (Multilayer Perceptron).
|
Leveraging the flexibility and high accuracy of the KAN architecture, we utilize the KAN classification prompt module in Path-SAM2, thereby endowing our network architecture with more precise classification capabilities.
|
D
|
Various anomaly detection surveys with a focus on time series and online functionality have been published over the years.
|
For each model group, approaches are segmented by anomaly detection type as well as the metrics used to quantify anomaly detection performance.
|
It discusses publicly available data sets, as well as the anomaly detection metrics (Section 4) used in the literature to evaluate approaches.
|
The table shows whether the surveys discuss anomaly detection taxonomy, available data sets and metrics, and what type of approaches are identified.
|
Table 1: Summary of surveys discussing online anomaly detection in time series. Key is as follows, TX: anomaly detection taxonomy, DS: data set overview, EM: evaluation metrics overview, AB: analysis of benchmarking, SSAD: semi-supervised anomaly detection, USAD: unsupervised anomaly detection, DL: deep learning approaches.
|
C
|
C1, C2, C3, C4, C5.
|
Obviously, P1 suffers from non-convexity due to the non-convex nature of both the objective function and constraints C1-C4. We thus propose an AO framework to address this challenge, in which the SCA and SDP methods are explored to optimize the APs’ transmit beamforming vectors, and the GA-PSO algorithm is leveraged to update the MAs’ positions [18].
|
In this subsection, we continue to optimize the positions of the MAs when the transmit beamforming is given.
|
In this paper, we investigate an MA empowered PLS mechanism for the cell-free SR system in the presence of an Eve. In the system, multiple distributed APs equipped with MAs collaboratively send confidential information to the primary user (PU) to resist eavesdropping from the Eve. At the same time, the backscatter device (BD) achieves secondary transmission by reflecting incident signals from APs to transmit its own information to the secondary user (SU). The MAs applied at the APs can flexibly adjust their positions to improve the channel conditions associated with the primary and secondary communications and worsen the transmission links to the Eve. Under this setup, we consider the problem of maximizing the secrecy rate of primary transmission for the PU under the quality of service (QoS) constraints imposed on the SU. To solve this non-convex optimization problem, an alternating optimization (AO) framework is adopted to decompose the coupling between the transmit beamforming vectors and the position variables of the MAs. For designing the transmit beamforming, we employ the successive convex approximation (SCA) and the semidefinite relaxation (SDR) to derive a near-optimal solution. For optimizing the positions of MAs, we utilize a genetic algorithm modified particle swarm optimization (GA-PSO) algorithm. Numerical results showcase the superior performance of the proposed MA empowered scheme with the GA-PSO algorithm in enhancing the secrecy rate.
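To sketch the flavor of the GA-PSO step, the toy Python below runs standard PSO updates plus a GA-style random mutation; the fitness function, unit search region, and all constants are stand-in assumptions rather than the secrecy-rate objective.

```python
# PSO with a GA-style mutation step to help escape local optima.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                              # stand-in for the secrecy rate
    return -np.sum((x - 0.3) ** 2)

def ga_pso(dim=4, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, p_mut=0.1):
    x = rng.uniform(0, 1, (n, dim))          # MA positions in a unit region
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmax(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, 1)
        mut = rng.random(x.shape) < p_mut    # GA-style mutation
        x[mut] = rng.uniform(0, 1, mut.sum())
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmax(pbest_f)].copy()
    return gbest, pbest_f.max()

print(ga_pso())
```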
|
In this paper, we have proposed a novel MA empowered PLS mechanism for a cell-free SR communication system, where the MAs at the distributed APs are utilized to guarantee secure transmission from the APs to the PU and enhance the secondary transmission performance. We have then studied the secrecy rate maximization problem of the primary transmission for the PU under the QoS constraints at the SU. To efficiently address the non-convex nature of the formulated problem, we have proposed an AO framework based on the SCA method, the SDR technique, and the GA-PSO algorithm to derive an approximately optimal solution. Finally, numerical results have shown that the proposed MA empowered scheme outperforms the FPA empowered scheme, indicating the potential of applying MAs for performance enhancement. Moreover, compared to using the PSO algorithm for optimizing the MA positions, the GA-PSO algorithm yields a more accurate solution.
|
A
|
By including high-resolution hours as a categorical variable, we have achieved accurate predictions and narrower confidence bounds for the absolute Laplacian and Gaussian RBF kernels, as shown in Figure 8 and Figure 9. The superior performance of the absolute Laplacian over the Gaussian RBF is largely attributable to the robustness of the Manhattan distance compared to the Euclidean distance (Aggarwal et al., 2001). An empirical demonstration is provided by comparing the covariance kernels in Figure 7. To complete our analysis, Figures 2 and 3 show satisfactory predictions and narrow confidence bounds for Switzerland and Germany, even when the method slightly fails to approximate the real value.
|
Therefore, our second contribution is applying kernel quantile regression to the medium load forecasting setting, see section 4, sticking to best practices and guidelines of popular literature reviews in the field of probabilistic electric load forecasting (PLF) (Lago et al., 2021; Hong and Fan, 2016; Nowotarski and Weron, 2018b).
|
This article addresses probabilistic forecasting by adopting the kernel quantile regression (KQR) method within the RKHS framework. The method was introduced in (Takeuchi et al., 2006) and further investigated in (Li et al., 2007; Zhang et al., 2016; Sangnier et al., 2016; Zheng, 2021). It offers a non-parametric and non-linear way to provide probabilistic forecasts. The main contribution of this article is to perform probabilistic forecasting with KQR for the Swiss, Austrian, and German energy systems, where the data are extracted from the ENTSO-E Transparency Platform, SECURES-Met (Formayer et al., 2023), and C3S Energy (Dubus et al., 2023), which is designed to assess the impacts of climate variability and climate change on the energy sector. The probabilistic forecast with KQR has also been validated on the GEFCom test case, where our Python-based open-source implementation compares favourably with the top teams in the probabilistic forecasting of electricity load and price.
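Quantile forecasts of this kind are typically trained and scored with the pinball loss; a small sketch with synthetic stand-in data follows.

```python
# Pinball (quantile) loss at level tau, averaged over observations.
import numpy as np

def pinball_loss(y, q_pred, tau):
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

rng = np.random.default_rng(0)
y = rng.normal(100.0, 10.0, 1000)               # stand-in load observations
q90 = np.full_like(y, 100.0 + 10.0 * 1.2816)    # assumed 90% quantile forecast
print(pinball_loss(y, q90, tau=0.9))
```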
|
We now apply KQR to the setting of probabilistic load and price forecasting. We use the GEFCom2014 (Hong et al., 2014) data to carry out our experiments. The GEFCom is a series of competitions that have been created with the intent of improving forecasting practices, addressing the gap between academia and industry, and fostering state-of-the-art research in the field of energy forecasting (Hong et al., 2016).
|
Finally, to recreate the setting of GEFCom2014 and to provide a fair comparison, we adhere rigorously to the rules of the competition. Next, we study the performance of KQR in the load and price tracks.
|
C
|
Table 5: Quantitative results of SAM 2 on the 23 PolypGen video sequences. As the input prompt, we use a bounding box on the first frame.
|
Figure 1: Qualitative Assessment of Segmentation Outcomes on Kvasir-SEG and CVC-300 Datasets using SAM [17] and SAM 2.
|
In this paper, we explore the application of SAM and SAM 2 to zero-shot polyp image and video segmentation. We evaluate their performance on benchmark datasets and compare them to existing methods. Our results demonstrate the potential of these models for efficient and accurate image and video polyp segmentation, thereby paving the way for improved clinical workflows and early cancer detection.
|
Table 4: A quantitative comparison of five public polyp segmentation datasets (CVC-ClinicDB, Kvasir, CVC-ColonDB, ETIS, and CVC-300) with state-of-the-art (SOTA) methods is presented. Bold indicates the best performance.
|
First, we compare the zero-shot segmentation results of the SAM and SAM 2 models on the CVC-ClinicDB, Kvasir-SEG, and CVC-300 datasets without fine-tuning. We evaluate four different prompt settings:
|
D
|
The results demonstrate the efficacy of our algorithm in estimating both the fine- and coarse-scale trajectories through learning the unknown process noise covariances at each scale. Figures 2(a)-2(d) show the true versus estimated states for each individual $d$ in the coarse time scale. The root mean square error (RMSE) averaged across all coarse time scale points $t$ for each individual and for each dimension $m \in M_{\tilde{X}}$ is shown in Table 2. Similarly, the RMSE averaged across all fine time scale points $k$ within each coarse time step $t$ for each $d$ is shown in Tables 3(a)-3(d). The true versus estimated trajectories are shown in Figures 22(a)-22(d) for coarse time point $t=11$. The trace plots for each dimension and each $d$ for the coarse scale are in Figure 3, and those for the fine time scale are in Figure 4.
|
Overall, the trace plots for each coarse-scale dimension and each individual $d$ demonstrate good convergence, but with higher variability in some cases (e.g., dimension 2, $d=2$). Similarly, the trace plots for each fine-scale dimension also demonstrate good convergence, but with higher variability for dimensions 2 and 3. The results suggest that the model is successfully learning the process noise covariances. The low RMSE values across most individuals and dimensions indicate that the algorithm is effective in capturing the latent states across both scales of the system while learning the noise with a high degree of precision. However, slight variations in RMSE across different dimensions and individuals suggest that the model’s performance could be further optimized, particularly for specific cases where higher errors were observed. In general, the results demonstrate that our approach is effective in modeling and estimating multiscale complex systems with feedback between each scale. Future work could focus on refining the algorithm and further reducing the process noise covariances.
|
where $\mathbf{A}$ is an $n \times n$ adjacency matrix describing interactions between the dimensions of $\mathbf{x}_{k,d}^{t}$ and $\mathbf{B}$ is a $D \times D$ adjacency matrix describing interactions between the different individuals $d$ in the coarse time scale. In this context, the weights $w_{k}$ represent the weighted contributions of each fine-scale developmental time point to the coarse-scale state. These weights encapsulate the influence of various stages of development on the overall fitness and hereditary characteristics of the organism.
|
Bayesian inference in state-space models has been widely used in a range of biological applications, from gene regulatory network inference ([2], [3]) to ecology ([4], [5]). However, to the author’s knowledge, there is no existing approach that integrates development and heredity into a unified modeling framework with Bayesian inference to learn unknown states and parameters at both time scales. In this work, we introduce a novel multiscale state-space model designed to capture the interaction between developmental and hereditary processes across different time scales with feedback between the scales. The model integrates fine-scale states that represent individual developmental stages and coarse-scale states that reflect hereditary traits across generations. We develop a Bayesian learning approach to estimate the unknown states by learning the process noise covariances. More specifically, we develop a Particle Gibbs with Ancestor Sampling (PGAS) algorithm, which combines particle filtering with ancestor sampling and Gibbs sampling for effective state and parameter estimation.
|
A
|
$$J\coloneqq\min_{\left\|\ket{x}\right\|_{2}=1}\ \braket{x}{C}{x}.$$
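Since $C$ is Hermitian, the minimization above is a Rayleigh-quotient problem: $J$ equals the smallest eigenvalue of $C$ and is attained by its ground state. A quick numpy check on a random Hermitian stand-in:

```python
# Verify J = min_{||x||=1} <x|C|x> equals the smallest eigenvalue of C.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
C = (A + A.conj().T) / 2                 # Hermitian stand-in cost Hamiltonian

evals, evecs = np.linalg.eigh(C)         # eigenvalues in ascending order
J, x_star = evals[0], evecs[:, 0]        # ground-state energy and state
print(np.isclose((x_star.conj() @ C @ x_star).real, J))
```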
|
Next, Figure 2(a) shows the robust optimal annealing protocol, computed by solving the robust quantum optimal control problem in Section 3 with regularization parameter $\zeta=0.2$. The singular region is an interval in the robust case, compare Section 4. Hence, we obtain a larger singular control section with smoothly varying input compared to the nominal case. As long as $\mu(\ket{x^{\ast}},\ket{\lambda^{\ast}})$ lies inside the singular region, the optimal input $u^{\ast}$ is singular. However, the optimal QA protocol still starts and ends with a bang. The kinks of the optimal input $u^{\ast}$ can be explained by the non-smoothness of the spectral norm.
|
In QA, the protocol $u$ is found by steering the quantum system towards the ground state by smoothly varying a parametrized Hamiltonian $H(u)$.
|
The idea of QA is based on the adiabatic theorem [farhi_quantum_2000]. A practical and widespread class of possible solutions for the above problem consists of bang-bang control strategies, commonly referred to as the quantum approximate optimization algorithm (QAOA) [farhi_quantum_2014]. Here, either the Hamiltonian $B$ or the Hamiltonian $C$ is applied to the quantum system in an alternating fashion. Bang-bang protocols are motivated by their optimality in different optimal control setups [kirk_optimal_2004].
|
Using Pontryagin’s maximum principle, previous studies showed that optimal QA protocols can contain singular sections [brady_optimal_2021]. A singular section is a section of the QA protocol where $u$ lies in the interior of $[0,1]$, i.e., a Hamiltonian that smoothly interpolates between $B$ and $C$ is applied to the quantum system. In addition, [brady_optimal_2021] shows that optimal QA protocols, i.e., solutions of the above QA problem, always start and end with a bang. This has led to new insights into the design of optimal QA protocols.
|
B
|
TAO Dataset. The thyroid-associated orbitopathy (TAO) dataset is an in-house multimodal dataset collected from the Gerald Choa Neuroscience Centre MRI Core Facility at the Prince of Wales Hospital in Hong Kong. It comprises 100 cases, each of which underwent orbital MRI and received a definitive TAO diagnosis. Each case includes pre-contrast T1-weighted (T1) and fat-suppressed post-contrast T1-weighted (T1c) images, sequentially acquired from the same patient. Notably, the T1/T1c data in the TAO dataset are unpaired due to differences in in-plane resolution. All MRI was performed on a 3.0 T Siemens scanner with a 64-channel head/neck coil. T1 data were acquired using a volumetric interpolated breath-hold examination (VIBE) pulse sequence with an in-plane resolution of 0.555 × 0.555 mm² and a slice thickness of 1.5 mm. T1c data were collected using a fat-suppressed spoiled gradient echo (GRE) core sequence with an in-plane resolution of 0.625 × 0.625 mm² and a slice thickness of 1.5 mm. Manual annotation was performed by a trained rater using ITK-SNAP, under the guidance of a senior radiologist with over 20 years of experience. The segmentation mask includes 8 anatomical structures: herniation of the lacrimal gland (LG), and compression and edema of the optic nerve (ON), inferior oblique muscle (IOM), superior oblique muscle (SOM), superior rectus (SR), lateral rectus (LR), medial rectus (MR), and inferior rectus (IR). For our experiments, we randomly selected 80 T1/T1c image pairs from the 100 cases as the training set, and the remaining 20 T1/T1c image pairs for the testing and validation sets.
|
Figure 2: Comparison of segmentation performance across different models on the MS-CMRSeg [11] dataset using various labeled data ratios, evaluated in terms of Dice score.
|
Table 1: Quantitative results of our approach and other methods on the AMOS [5] and TAO datasets in terms of Dice score with 10% labeled data.
|
Figure 3: (a) and (b) Visual comparison between the CML [13] and our method on the TAO and MS-CMRSeg [11] datasets. (c) Ablation analysis of the cross CMC strategy. (d) Ablation study of single and multiple modalities for training our model on the MS-CMRSeg dataset [11].
|
We used the CLIP-Driven Universal Model pre-trained weights [6] as our backbone when using a single RTX 3090 GPU; when using multiple GPUs, we used the SAM-Med3D [10] pre-trained weights as our backbone. We conducted extensive experiments using 10% and 20% labeled-data ratios from three datasets for training, and employed the Dice score and Average Symmetric Surface Distance (ASSD) for quantitative evaluation. We compared our model with other multimodal learning methods, such as EFCD [4] and mmFor [14]. Additionally, we compared our model with semi-supervised multimodal learning approaches, such as UMML [18] and CML [13], and with the fully supervised method V-Net [7]. Table 1 presents the quantitative performance of different methods on the AMOS [5] and TAO datasets. The results show that our framework considerably surpasses the comparison methods in both CT and MRI modalities, achieving high Dice scores in a label-scarce scenario on AMOS [5]. As is well known, unpaired data from different modalities originate from different patients and cannot be directly aligned, which complicates the extraction of consistent features across modalities compared to paired data. Although the AMOS dataset [5] is unpaired, our model demonstrates superior performance. This can be attributed to its architecture, which is based on the Transformer model of SAM-Med3D [10]: the input images are divided into patches, which are then linearly embedded and combined with positional encoding, providing strong feature learning capability. We then utilize the CSC loss to align the channel-wise features from multimodal images. Furthermore, we introduce a novel MIA module to effectively harness modality-independent knowledge from each modality, facilitating efficient feature fusion. Consequently, our model achieves superior performance on semi-supervised multimodal segmentation tasks. For the TAO dataset, specifically the T1 and T1c modalities, our framework also shows promising results. Fig. 2 provides a comparative analysis of segmentation performance across various models on the MS-CMRSeg dataset. The performance results evaluated using the ASSD score on the MS-CMRSeg dataset are presented in Table 3 of the supplementary material. It is evident from the results that our method surpasses the comparison methods in both BSSFP and LGE modalities, consistently achieving high Dice scores.
|
B
|
However, research on SAR-to-optical translation using Very High Resolution (VHR) data with sub-meter resolution is extremely scarce. Most existing studies have utilized datasets that fall short of sub-meter VHR standards [1, 2, 3, 4, 5]. The widely used SEN12 dataset [6] consists of paired SAR and optical images at 10-meter resolution. Although this dataset has been immensely valuable for various remote sensing applications, its relatively coarse resolution limits its applicability to VHR SAR-to-optical translation tasks. Other datasets, such as WHU-OPT-SAR [7] and SARptical [8], offer improved resolution but still do not meet the sub-meter criterion for true VHR data.
|
Yet even with the inherent advantage of working at these lower resolutions, existing GAN-based approaches to SAR-to-optical translation have struggled to achieve practical performance, facing issues such as training instability, mode collapse, and loss of geometric fidelity in complex scenes [1, 2, 3, 9]. Only a few recent studies [4, 5] have explored Conditional Diffusion Models (CDMs) to overcome these limitations of GAN-based models. CDMs currently dominate the image synthesis field [10], including image-to-image translation. Still, despite their potential, CDMs suffer from limited model generalization: they lack robust theoretical foundations ensuring that the outcome accurately represents the intended conditional distribution [11], and they often experience performance degradation when translating between significantly disparate domains.
|
Experimental results show that our conditional BBDM framework significantly improves SAR-to-optical translation quality. The proposed approach outperforms both GAN-based models and the conditional Latent Diffusion Model (LDM) [13] across various metrics. These results highlight the benefit of conditioning and demonstrate that the proposed method provides a robust framework for bridging the gap between SAR and optical imagery.
|
3) We conducted SAR-to-optical image translation experiments on a 0.5 m resolution VHR imagery dataset (MSAW) and demonstrated that the proposed model significantly outperforms both the conditional LDM and existing GAN-based models across multiple metrics.
|
1) We introduce a novel image-to-image translation framework (BBDM) to the SAR-to-optical research field, offering an alternative to the predominantly used GAN models.
|
A
|
EEG data varies among subjects, hardware, and environmental factors and is susceptible to noise and artifacts. Therefore, a lot of research in this field focuses on signal cleaning to
|
Historically, MATLAB has been the dominant platform for EEG research, leading to the development of numerous frameworks and pipelines for EEG analysis. EEGLAB [3], Brainstorm [4] and FieldTrip [5]
|
Current preprocessing methods do not scale to the substantial volume of data required for SSL. While current pipelines allow for manual correction and validation of data, this makes them subjective and challenges reproducibility. Additionally, the correction process is often too task-specific for SSL, given the diverse nature of downstream tasks. Common preprocessing methods also tend to cause significant data loss, especially considering that the largest available EEG dataset, the Temple University Hospital EEG Corpus (TUEG) [2], comprises EEG recordings of highly variable signal-to-noise ratio. Most approaches become infeasible with terabytes of data, underlining the need for robust, optimised pipelines capable of efficiently handling such large volumes.
|
The 'industry standard' in machine learning is to let complex deep learning models handle massive datasets with limited preprocessing, especially with self-supervised learning. For electroencephalogram (EEG) data, Kostas et al. pretrained a complex transformer model on the massive Temple University Hospital (TUH) EEG Corpus with rather rudimentary preprocessing. However, the TUH EEG Corpus presents significant challenges due to its variability in signal-to-noise ratio, equipment used, recording length, and more. We introduce an efficient preprocessing pipeline designed to handle variability within one or more EEG datasets and to combine them into a single preprocessed dataset suitable for self-supervised learning applications. The Python-based pipeline improves stability, convergence, and contrastive accuracy during pretraining and produces a latent space better suited to downstream classification tasks. More specifically, our probing results show a significant improvement on several downstream classification tasks when pretraining with our preprocessing pipeline compared to a simple baseline pipeline. Besides the plug-and-play preprocessing pipeline, we also present tools for reproducible preprocessing of the TUH EEG Corpus, ready for future development of pretrained EEG foundation models. In conclusion, our results provide evidence that physically motivated preprocessing is useful for self-supervised learning of EEG representations.
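As a rough illustration of what such a pipeline does per recording, the sketch below applies typical physically motivated steps (bandpass filtering, notch filtering, resampling) to one TUH EDF file using MNE; it is not the exact pipeline presented here, whose stages and parameters are described in the paper, and the cutoff values are illustrative.

```python
import mne

def preprocess_recording(edf_path: str, target_sfreq: float = 256.0):
    """Minimal physically motivated preprocessing of one TUH EEG recording.

    A sketch of typical steps (bandpass, notch, resample), not the exact
    pipeline proposed in the paper.
    """
    raw = mne.io.read_raw_edf(edf_path, preload=True, verbose="error")
    raw.filter(l_freq=0.5, h_freq=100.0)  # remove slow drift and HF noise
    raw.notch_filter(freqs=60.0)          # US power-line interference
    raw.resample(target_sfreq)            # unify sampling rate across files
    return raw.get_data()                 # (n_channels, n_samples) array
```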
|
With our pipeline, we address a major challenge of large-scale EEG data preprocessing. A key design consideration, however, is whether to incorporate ICA with ICLabel. This technique is well established in EEG preprocessing and analysis and is part of the AutoMagic [7] pipeline. Our analysis indicates that including ICA does not significantly improve pretraining performance. Rather, the probing results show that it slightly reduces downstream accuracy when used in pretraining, and significantly reduces it when applied to the downstream datasets. This suggests either that the model misses artifacts under SPEED and perhaps uses them in classification, or that ICLabel incorrectly classifies independent components, leading to the removal of meaningful data.
|
A
|
The Object Motion Sensitivity (OMS) algorithm proposed in [15], grounded in experimental neuroscience, offers a robust framework for object motion segmentation in the presence of ego-motion. The algorithmic implementation was designed to be tunable, allowing application-specific configuration adjustments to optimize performance across diverse environments. While the original configuration performs well on the ego-motion datasets used, it did not account for the overhead associated with hardware circuit implementation. Consequently, we study the effects of various key parameters on the algorithm's performance and their relation to the hardware implementation. The software algorithmic analysis is then used to guide a CMOS circuit design, enabling an OMS circuit that is re-configurable at runtime.
|
An algorithmic implementation of the OMS circuit derived from experimental neuroscience was presented in Snyder et al. [15]. This method proposed a software algorithm based on convolutional kernels to compute OMS from DVS data and distinguish object motion from camera motion (ego-motion). This previous work provided a quantitative comparison with several state-of-the-art methods for ego-motion compensation and object motion segmentation.
|
The OMS algorithm from [15] takes as input the photoreceptor activations of bipolar cells, represented by DVS frames. The algorithmic implementation consists of two square convolutional filters: the center kernel and the surround kernel, representing the center and surround regions of the human visual system. The larger kernel (surround) acts as the connection between the bipolar cells and the amacrine cells, while the smaller kernel (center) serves as the synapse between RGCs and their corresponding bipolar cell cluster.
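A minimal software sketch of this center-surround computation on a single DVS frame is given below; the kernel sizes and the firing threshold are illustrative stand-ins for the tunable parameters studied in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def oms_response(dvs_frame: np.ndarray, center_size: int = 3,
                 surround_size: int = 15, threshold: float = 0.05):
    """Sketch of the center-surround OMS computation on one DVS frame.

    dvs_frame: 2D array of event activations (bipolar-cell proxies).
    Kernel sizes and threshold are illustrative, not the paper's values.
    """
    center_k = np.ones((center_size, center_size)) / center_size**2
    surround_k = np.ones((surround_size, surround_size)) / surround_size**2
    center = convolve2d(dvs_frame, center_k, mode="same", boundary="symm")
    surround = convolve2d(dvs_frame, surround_k, mode="same", boundary="symm")
    # Surround (global) activity inhibits center (local) activity;
    # RGC-like units fire where local motion exceeds global motion.
    return (center - surround) > threshold
```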
|
Previous work proposed an algorithmic implementation of the OMS biological circuitry derived from experimental neuroscience [15]. The OMS algorithm, tested on synthetic and real DVS data, aimed to functionally replicate neural computations performed by the amacrine and retinal ganglion cells (RGC).
|
The retina is one of the core components of the human visual system and is made of three main layers: the photoreceptor layer, the outer plexiform layer (OPL), and the inner plexiform layer (IPL) [12]. Each layer plays a fundamental role in the computation of Object Motion Sensitivity (OMS), among over 40 additional visual features. The photoreceptor layer transduces visual stimuli into electrical signals. Then, bipolar cells in the OPL respond to luminance changes. For OMS computations, amacrine cells form connections between bipolar signals in the OPL and the Retinal Ganglion Cells (RGC) in the IPL. Through inhibitory and excitatory synapses, amacrine cells integrate contrast signals from a global area (surround) and subtract them from the local area (center). Consequently, RGCs in the OMS circuit respond to motion in the local area and to differential motion between the global and local areas [2]. Current neuromorphic vision sensors such as DVS and DAVIS are inspired by computations in the photoreceptor layer and OPL [13, 14]. In this work, we investigate hardware-algorithm re-engineering of the retinal OMS circuit, comprising OPL and IPL functionalities. A brief introduction to related works on camera-compatible hardware and application-driven algorithmic implementations of the retinal OMS circuit is given below.
|
C
|
S3Attention ($r, s_1, s_2 = 8$)
|
We conduct extensive experiments on long-sequence tasks, long-term time series forecasting, and GLUE tasks. In particular, on the Long Range Arena benchmark [40], S3Attention achieves an average accuracy of 64% with fixed parameters (the suggested setting in [40, 41]) and 66% with fine-tuned parameters, improving on the 62% of the best Attention-type model. Moreover, it performs comparably to recent state-of-the-art models for long-term time series forecasting and on GLUE tasks.
|
We propose S3Attention, a robust and efficient Attention architecture for modeling long sequences with a good balance between feature preservation and noise resistance. It combines a Fourier convolutional stem that smooths information among tokens with a Skeleton-Sketching-inspired efficient Attention. In particular, our proposed Skeleton Attention directly samples columns and rows of the token matrix. This design increases the model's robustness and, as a beneficial side effect, yields near-linear complexity. We conduct a thorough theoretical and experimental analysis of the proposed model and show its effectiveness. Lastly, extensive experiments show that the proposed model achieves the best performance on Long Range Arena and state-of-the-art performance on long-term time series forecasting tasks compared with various Attention-based baselines.
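The sketch below conveys the row/column sampling idea behind Skeleton Attention: instead of full $N \times N$ attention, a handful of sampled rows and columns of the token matrix serve as keys and query dimensions, giving near-linear cost in sequence length. The sampling scheme and recombination actually used by S3Attention are more elaborate; this is only an illustration under that simplification.

```python
import torch

def skeleton_attention(x: torch.Tensor, s1: int = 8, s2: int = 8):
    """Illustrative sketch of Skeleton-Sketching-style attention.

    x: (B, N, D) token matrix. Sample s1 token rows and s2 feature
    columns; attend the full sequence against the sampled skeleton.
    """
    B, N, D = x.shape
    rows = torch.randint(0, N, (s1,))      # sampled token (row) indices
    cols = torch.randint(0, D, (s2,))      # sampled feature (column) indices
    keys = x[:, rows, :]                   # (B, s1, D) sampled rows as keys
    q = x[:, :, cols]                      # (B, N, s2) queries in sampled dims
    k = keys[:, :, cols]                   # (B, s1, s2)
    # Cost is O(N * s1 * s2) instead of O(N^2 * D)
    attn = torch.softmax(q @ k.transpose(1, 2) / s2**0.5, dim=-1)  # (B, N, s1)
    return attn @ keys                     # (B, N, D)
```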
|
In this section, we test S3Attention on the Long Range Arena (LRA) datasets [40] and six real-world time series benchmark datasets for long-term forecasting. We also evaluate the transfer learning ability of S3Attention on GLUE tasks. In recent literature (e.g., [66, 67, 68]), RNN-type models are also widely discussed for long-sequence tasks; we do not include them as benchmarks since this paper focuses on improving the Attention structure. The testing environment contains 12 Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz CPUs, one TESLA V100 SXM2 32G GPU, and 90 GB of memory. We implement S3Attention based on the official code of [23] and [12] for the LRA and time-series forecasting tasks, respectively. Implementation details for S3Attention are provided in the supplementary material, and the code is available online at https://github.com/wxie9/S3Attention.
|
To further evaluate the proposed S3Attention, we also conduct extensive experiments on six popular real-world benchmark datasets for long-term time series forecasting, covering traffic, energy, economics, weather, and disease, as shown in Table 2.
|
D
|
C. Hawthorne, A. Stasyuk, A. Roberts, I. Simon, C. A. Huang, S. Dieleman, E. Elsen, J. H. Engel, and D. Eck, “Enabling factorized piano music modeling and generation with the MAESTRO dataset,” in 7th International Conference on Learning Representations, New Orleans, USA, 2019.
|
V. Emiya, N. Bertin, B. David, and R. Badeau, “MAPS - a piano database for multipitch estimation and automatic transcription of music,” INRIA, France, Research Report 00544155, 2010. [Online]. Available: https://hal.inria.fr/inria-00544155
|
V. Emiya, N. Bertin, B. David, and R. Badeau, “MAPS - a piano database for multipitch estimation and automatic transcription of music,” INRIA, France, Research Report 00544155, 2010. [Online]. Available: https://hal.inria.fr/inria-00544155
|
V. Emiya, N. Bertin, B. David, and R. Badeau, “MAPS - a piano database for multipitch estimation and automatic transcription of music,” INRIA, France, Research Report 00544155, 2010. [Online]. Available: https://hal.inria.fr/inria-00544155
|
V. Emiya, N. Bertin, B. David, and R. Badeau, “MAPS - a piano database for multipitch estimation and automatic transcription of music,” INRIA, France, Research Report 00544155, 2010. [Online]. Available: https://hal.inria.fr/inria-00544155
|
A
|
Age was binarized according to the decision boundaries in the DTC: ages below 80 were encoded as 0, and 80 or above as 1.
|
In this paper, we propose learning clinical information in the form of discrete binary and ordinal variables to improve the feature representation of ICH CT scans in an end-to-end multi-task prognosis model. Our contributions can be summarized as: (1) evaluating the clinical and demographic variables with the highest impact on ICH prognosis through machine learning (ML) tabular models, and their best encoding for the multi-task models; (2) introducing the two primary tabular variables driving the prognosis (GCS and age) into two multi-task prognostic image models (binary and ordinal); (3) performing ablations to show the predictive power of the proposed multi-task models; (4) assessing interpretability saliency maps [borys2023explainable] and their alignment with neuroradiologists' knowledge, ultimately comparing the prognostic capabilities of the models with four board-certified NRs.
|
The proposed method aims to enhance the image model's feature representation by learning a shared loss regularization across the main decision-driving variables in the ICH prognosis tabular models. To this end, we first evaluated the prognostic capability of the available tabular variables. Subsequently, we used a 3D DenseNet121 model [monai_consortium_2023_8436376] as a feature extractor, and we designed two multi-task image models that aggregate the loss of the prognosis task with the losses of one clinical and one demographic variable, back-propagated through the image model. The method is presented in Fig. 1 and explained below.
|
F. MT (ord GCS, bin age) highlights the highest density component of the bihemispheric subdural hematomas present in the patient. Baseline and MT (bin GCS, bin age) show less useful saliency maps.
|
The first multi-task model predicted prognosis, binary GCS, and binary age, hereafter referred to as MT (bin GCS, bin age). The second multi-task model integrated prognosis, three-class ordinal GCS, and binary age, hereafter referred to as MT (ord GCS, bin age). Both models used a DenseNet121 backbone for feature extraction, and the losses were combined following Eq. 1 to enhance the feature representation for each task:
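Since Eq. 1 is not reproduced in this excerpt, the sketch below assumes a simple weighted sum of per-task binary cross-entropy losses over a shared feature vector; the task weights w_gcs and w_age are hypothetical placeholders for the paper's weighting.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Sketch of MT (bin GCS, bin age): one shared DenseNet feature vector,
    three task heads, and a weighted loss sum standing in for Eq. 1."""

    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        self.prognosis = nn.Linear(feat_dim, 1)
        self.gcs = nn.Linear(feat_dim, 1)
        self.age = nn.Linear(feat_dim, 1)
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, feats, y_prog, y_gcs, y_age, w_gcs=0.5, w_age=0.5):
        # y_* are float tensors of 0/1 labels; weights are illustrative
        loss = (self.bce(self.prognosis(feats).squeeze(1), y_prog)
                + w_gcs * self.bce(self.gcs(feats).squeeze(1), y_gcs)
                + w_age * self.bce(self.age(feats).squeeze(1), y_age))
        return loss
```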
|
D
|
This enabled the robot not only to imitate the teacher but also to modify the action at the same time to complete the task.
|
We developed a system that can adaptively scrub and rinse dirty dishes based on visual and force information.
|
the robot was trying to rinse where there is water, as in \figref{rinse}, and to scrub with respect to the position of the dirt, as in \figref{scrub}.
|
As a result, the robot can perform dexterous manipulation such as scrubbing moderately and spreading water on the dish.
|
the human judges whether the robot is correctly scrubbing the dishes with the sponge pressed against them, or whether it is performing an unexpected operation such as almost dropping the dishes.
|
C
|
For baseline models, we have selected several denoising methods, ranging from classical signal processing techniques to modern CNN-based frameworks:
|
In conclusion, this paper introduces a novel SDE-based diffusion model for removing multiplicative noise. The work presents the construction of the forward and reverse SDEs that directly capture the dynamics of the noise model. In addition, it establishes the training objective as well as multiple sampling equations based on probability flows and DDIM techniques. The proposed model is compared to classical image processing algorithms, including BM3D and SRAD, as well as modern CNN-based methods. Extensive experiments on different datasets demonstrate that our method outperforms current state-of-the-art denoising models on perception-based metrics across all noise levels, while remaining competitive in PSNR and SSIM.
|
Block-matching and 3D filtering (BM3D) was proposed in (Dabov et al. 2007); it partitions the image into multiple smaller patches and performs collaborative filtering to remove the noise. The method exploits redundancy and consistent information across patches to generate a clean image, and it achieved state-of-the-art performance at the time without requiring prior knowledge of the noise statistics.
|
Multiplicative noise removal is a long-standing problem in computer vision and has been studied by many researchers over the past few decades (Huang, Ng, and Wen 2009; Bioucas-Dias and Figueiredo 2010; Huang et al. 2012; Shan, Sun, and Guo 2019; Feng and Zhu 2021). Unlike additive noise, which usually results from thermal fluctuations during image acquisition or transmission, multiplicative noise arises when multiple copies of the signal with random scaling factors are added together. This often happens due to the internal physical construction of the image-capturing devices, i.e., optical lenses, radar/laser imaging, ultrasound sensors, etc. Because of this, removing multiplicative noise, sometimes referred to as despeckling, often requires more sophisticated approaches than its additive counterpart. Popular approaches include modelling the noise using Partial Differential Equations (PDEs) (Yu and Acton 2002; Chen et al. 2012), converting to the additive domain and optimizing a Total Variation (TV) objective (Shi and Osher 2008), and applying MAP estimation (Aubert and Aujol 2008). Classical methods based on the block-matching technique also work decently for this problem (Dabov et al. 2007).
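For intuition, the snippet below simulates multiplicative (speckle) noise with a unit-mean Gamma model, a common assumption for L-look intensity images, and shows the classical log-transform that converts the problem to the additive domain; the noise model is illustrative rather than the one derived in this paper.

```python
import numpy as np

def add_speckle(img: np.ndarray, looks: int = 4) -> np.ndarray:
    """Simulate multiplicative noise y = x * n, with n ~ Gamma of unit mean
    (the standard L-look intensity model). 'looks' controls severity."""
    noise = np.random.gamma(shape=looks, scale=1.0 / looks, size=img.shape)
    return img * noise

# The classical trick: log(y) = log(x) + log(n) converts the problem to
# additive noise, at the cost of distorting the noise statistics.
noisy = add_speckle(np.ones((64, 64)))
log_additive = np.log(noisy + 1e-8)
```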
|
To the best of our knowledge, we are the first to directly model this problem using SDE, which captures the dynamics of the noise process, and derive the sampling equation which is then used to perform denoising.
|
B
|
For the ISIC and ZD-LCI-GIM datasets, our VM-UNetV2 outperforms other models in terms of the IOU, DSC and Acc metrics.
|
TABLE II: Comparative experimental results on the Kvasir-SEG and ClinicDB datasets (bold indicates the best)
|
Gastrointestinal polyp datasets: The Kvasir-SEG [18], ClinicDB [19], ColonDB [20], Endoscene [21], and ETIS [22] are currently publicly available polyp datasets.
|
TABLE III: Comparative experimental results on the ColonDB, ETIS and Endoscene datasets (bold indicates the best)
|
On the Kvasir-SEG, ClinicDB, and ETIS datasets, our algorithm achieved state-of-the-art (SOTA) performance, and it also showed competitive performance on ColonDB and Endoscene.
|
D
|
Besides, compared with SwinIR [35], OAPT adds only 1.5M parameters for the offset predictor but improves performance by about 0.06 dB on average across three datasets.
|
The results on color double JPEG images are illustrated in Tab. 2. We mainly compare OAPT with SwinIR and HAT-S [9], which have parameter counts similar to ours.
|
As double JPEG image restoration is a relatively new task, we compare our method with DnCNN [73], RNAN [77], SwinIR [35], HAT [9], ART [71] and FBCNN [28]. For the grayscale double JPEG image restoration experiment, all methods except FBCNN are fine-tuned on the double JPEG compression dataset. For a relatively fair comparison, we train the small version of HAT (HAT-S), initializing it with its pretrained super-resolution weights. Both SwinIR and the reconstructor of our model are initialized with the weights of SwinIR for JPEG artifact reduction, and we likewise train ART from its pretrained JPEG-artifact-reduction model. Following the same settings as [35, 28], PSNR, SSIM [64] and PSNR-B [67] are used as the main metrics.
|
To determine whether ground-truth offsets matter to the network and how prediction accuracy affects it, we conducted the following experiment. We removed the offset predictor from OAPT and fine-tuned the model with ground-truth offsets, making OAPT a non-blind model, termed OAPT* (or Ours*), which shares the same parameter count as SwinIR. Tab. 4 reports results for HAT-S, SwinIR, OAPT and OAPT* under other compression types, along with the accuracy of the offset predictor in OAPT. With ground-truth offsets, OAPT* achieves the best average performance while adding no parameters or computational complexity, improving about 0.08 dB over SwinIR on average. Moreover, as prediction accuracy increases, the performance gap between OAPT and OAPT* narrows. This indicates that at low accuracy the performance benefits only from the larger receptive field provided by non-local grouping, whereas at high accuracy it benefits from both the larger receptive field and proper pattern clustering via correct offsets, enabling better information extraction.
|
As the main backbone is similar to SwinIR, we initialized our image reconstructor with the pretrained SwinIR model and fine-tuned it on the double-compression datasets.
|
A
|
The ECAPA-TDNN network is used as the speaker embedding extractor for its simplicity, with 1024 channels in the convolutional frame layers. After training, 192-dimensional speaker embeddings are extracted through the backbone and speaker encoder. The whole utterance is used to extract speaker embeddings during the test stage. Cosine similarity is used for scoring, and the equal error rate (EER) is used as the performance metric.
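For reference, the EER used here can be computed from trial labels and cosine scores as the operating point where false-acceptance and false-rejection rates coincide; a standard sketch:

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels: np.ndarray, scores: np.ndarray) -> float:
    """EER: where the false-acceptance and false-rejection rates meet.

    labels: 1 for same-speaker trials, 0 otherwise;
    scores: cosine similarities between the two embeddings of each trial.
    """
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))   # closest crossing point
    return float((fpr[idx] + fnr[idx]) / 2)
```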
|
Tables 1 and 2 show the performance under seen and unseen noisy conditions, respectively. To observe the embedding distribution, we selected 40 speakers from the VoxCeleb1 test set and randomly sampled 20 utterances from each speaker to generate speaker embeddings. The t-SNE visualization of speaker embeddings under seen and unseen noise conditions is plotted in Figure 2. Clean means the baseline is trained on the original dataset; Joint means the baseline is trained on the original and noisy datasets. As anticipated, the performance of the baseline trained on the original dataset markedly degrades in noisy environments. Data augmentation enhances the robustness of the model to noise: while Joint training surpasses Clean training in effectiveness, the extent of this improvement is constrained. The model trained with noise-disentanglement metric learning (NDML) [24] is used for comparison.
|
The ECAPA-TDNN network is used as the speaker embedding extractor for its simplicity, with 1024 channels in the convolutional frame layers. After training, 192-dimensional speaker embeddings are extracted through the backbone and speaker encoder. The whole utterance is used to extract speaker embeddings during the test stage. Cosine similarity is used for scoring, and the equal error rate (EER) is used as the performance metric.
|
The proposed noise-disentanglement adversarial training architecture consists of three modules: a backbone $B$, a disentanglement module and an adversarial training module, as illustrated in Figure 1. The disentanglement module includes a speaker encoder $E_s$, a speaker-irrelevant encoder $E_i$ and a reconstruction module $D$. The adversarial training module, which includes a binary domain classifier with a gradient reversal layer, is used to discourage $E_s$ from encoding acoustic condition information. The parameters of the backbone, speaker encoder, speaker-irrelevant encoder and decoder are denoted as $\theta$, $\phi_s$, $\phi_i$ and $\phi_d$, respectively. Finally, the reconstruction loss, feature-robust loss, classification loss and adversarial loss are used jointly to optimize the speaker encoder and backbone network.
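The gradient reversal layer at the heart of the adversarial module is a standard construction: identity in the forward pass, negated (scaled) gradient in the backward pass, so the domain classifier's loss pushes the speaker encoder away from encoding noise conditions. A minimal PyTorch sketch, with the scaling factor lam as an assumed hyperparameter:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on backward."""

    @staticmethod
    def forward(ctx, x, lam: float):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the speaker encoder
        return -ctx.lam * grad_output, None

def grad_reverse(x: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lam)
```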
|
The speaker encoder and speaker-irrelevant encoder are 2-layer autoencoders with a hidden size of 1024. The decoder is almost the same as the encoders.
|
D
|
This training strategy not only addresses the issue of insufficient training data but also enhances the overall performance of our model. The effectiveness of this training strategy will be examined in Section 5.3.
|
Therefore, we trained our model with $K$ values of $\{10, 20, 30, 40, 50, 60\}$ to study the prosody expressiveness of the generated speech. The results are presented in Table 2. All models perform similarly in terms of STOI, as expected, since the proposed method primarily targets prosody expressiveness. When $K=50$, the speech generated by our model exhibits the lowest values of the prosody-related metrics GPE and FFE. Lower GPE and FFE values indicate that the prosody of the synthesized speech is closer to that of the ground-truth speech, demonstrating greater prosody expressiveness. This suggests that appropriately increasing the maximum multimodal context length can enhance the prosody expressiveness of synthesized speech. However, as $K$ increases further, the LSE-C and LSE-D metrics deteriorate, implying that excessively long phoneme sequence inputs increase dubbing difficulty. Hence, we select $K=50$ as the optimal maximum multimodal context length to strike a balance between prosody expressiveness and dubbing difficulty.
|
(2) F0 Frame Error (FFE) [31]: measures the percentage of frames with either a voicing decision error or a pitch error exceeding 20%. The GPE and FFE metrics are related to prosody; lower values indicate that the synthesized speech demonstrates greater prosody expressiveness.
|
In continuous videos, human speech maintains consistency, with the prosody of the current sentence influenced by context speech [10]. In the AVD task, the generated dubbing is combined with the original context in the final video. Ensuring that the prosody of the generated dubbing aligns with the multimodal context is therefore crucial, necessitating the consideration of context ground-truth speech. To this end, we designed a Context Acoustic Decoder to predict global context mel-spectrograms with the assistance of the adjacent ground-truth mel-spectrograms of the current sentence.
|
(1) Gross Pitch Error (GPE) [30]: measures the percentage of frames where the pitch error exceeds 20% and voicing is present in both the synthesized speech and ground-truth speech.
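A straightforward sketch of this GPE definition, assuming frame-aligned F0 tracks with unvoiced frames encoded as zero:

```python
import numpy as np

def gross_pitch_error(f0_ref: np.ndarray, f0_syn: np.ndarray) -> float:
    """GPE: among frames voiced in both the ground-truth and synthesized
    speech, the fraction whose pitch deviates by more than 20%."""
    voiced = (f0_ref > 0) & (f0_syn > 0)    # voicing present in both
    if not voiced.any():
        return 0.0
    rel_err = np.abs(f0_syn[voiced] - f0_ref[voiced]) / f0_ref[voiced]
    return float((rel_err > 0.2).mean())
```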
|
D
|
Our results demonstrate that DL models can accurately predict the complex dynamics of wastewater levels in real-world scenarios. Global models, with full access to all sensor readings under normal operation without network outage, exhibit high forecast precision for wastewater levels in the overflow basin. This enhanced precision can significantly aid sewage treatment facilities in effectively redistributing the load of the CSS.
|
In Figure 4, we present an exemplary forecast for a 12-hour horizon. We observe that, despite considerable variability, the global LSTM model predicts spikes with a higher degree of precision, though it shows considerable deviations around the mean values near zero. In contrast, the local TFT model struggles to predict sudden changes after longer periods of stagnancy.
|
In contrast, local models achieve lower forecasting precision than global models. The reason could be the heavy concentration of target values around the mean: owing to sudden changes after longer periods of stability, the local models struggle over longer forecasting periods. However, local models can serve as a fallback in the event of a network interruption, when exogenous variables become unavailable. Our results indicate that even when all network connections are lost and only the historical readings of an individual sensor are available, adequate forecasts can still be made.
|
Global Model Approach: This approach corresponds to the scenario of normal CSS operation, where all sensors are fully operational, and all data can be transmitted reliably over the network. In this case, all available data, including exogenous variables such as rainfall data, are integrated into a single model for forecasting the relevant target variable. This approach, referred to as the global model, allows the models to leverage additional contextual information to improve forecasting precision.
|
Global vs. Local Model Comparison: We compared global and local model approaches, highlighting their strengths and limitations in sewage overflow forecasting. Our findings indicate that global models generally outperform local models in terms of MSE. However, local models are advantageous in scenarios where exogenous data is unavailable, offering a computationally efficient alternative.
|
B
|
In this paper, we expand on the aforementioned studies to show the functioning of MBB-based microgrids in specific operating conditions to decouple low power quality issues, such as 3-phase voltage unbalance, from the rest of the connected systems. We use a dynamic model for the BTB converter that is developed and integrated into a distribution system solver, GridLAB-D, for system-level simulations [14]. The use cases of the BTB converter are simulated on a grid-microgrid network modified from the IEEE 13-node test system. The model considers 3-phase and 1-phase loads and DERs such as solar Photovoltaic (PV), Battery Energy Storage System (BESS), and EVs. We make the following key contributions:
|
In this paper, we expand on the aforementioned studies to show the functioning of MBB-based microgrids in specific operating conditions to decouple low power quality issues, such as 3-phase voltage unbalance, from the rest of the connected systems. We use a dynamic model for the BTB converter that is developed and integrated into a distribution system solver, GridLAB-D, for system-level simulations [14]. The use cases of the BTB converter are simulated on a grid-microgrid network modified from the IEEE 13-node test system. The model considers 3-phase and 1-phase loads and DERs such as solar Photovoltaic (PV), Battery Energy Storage System (BESS), and EVs. We make the following key contributions:
|
This paper is accepted for publication in IEEE IECON 2024, Chicago, IL. The complete copyright version will be available on IEEE Xplore when the conference proceedings are published.
|
Different combinations of these blocks can be made for greenfield and brownfield microgrids to enable advanced control and communication capabilities in the MBB-based microgrids. The paper explores the application of the integrated block with a BTB converter that has control and communication capability and manages the power import and export in the MBB-based microgrid.
|
Two cases of power-quality isolation are discussed, showing that MBB-based microgrids can import or export power without compromising power quality on the sides where it matters. In the data center application, the data center resides on the microgrid and is sensitive to power quality; the MBB at the point of common coupling imports power from an unbalanced grid while keeping it balanced within the microgrid. Similarly, in the V2G use case, the MBB-based microgrid exports power to the grid for maximum resource utilization: although the microgrid is unbalanced due to unbalanced generation at the grid edge, the power exported to the grid is balanced, as preferred by operators.
|
B
|
A publicly available myocardial T1 mapping dataset [4, 5] consists of 210 subjects, 134 males and 76 females, aged 57 ± 14 years. All subjects were diagnosed with or suspected of having cardiovascular diseases. The imaging was performed on a 1.5T MRI scanner (Philips Achieva) equipped with a 32-channel cardiac coil, using an ECG-triggered, free-breathing, slice-interleaved T1 mapping sequence (STONE) [30]. The acquisition parameters were as follows: field of view (FOV) = 360 × 351 mm² and voxel size = 2.1 × 2.1 × 8 mm³. Five slices were captured from the base to the apex in the short-axis view for each subject at 11 distinct time points. Additionally, the dataset included manual expert segmentations of the myocardium [4]. The images were resized to 160 × 160 pixels per time point, and min-max normalization was applied to the entire sequence to standardize the image intensities.
|
Experimental Setup: The objective of this experiment was to evaluate the effectiveness of the MBSS-T1 method in accounting for motion during free-breathing and breath-hold MRI scans using the MOLLI sequence. We utilized the nnUNet model pre-trained on the STONE dataset, as described in the previous section. To make MBSS-T1 compatible with the MOLLI sequence, we adjusted the number of heads in the network output from 11 to 8. The performance of three motion-correction methods was compared: (1) without motion correction, (2) Siemens MyoMaps, and (3) MBSS-T1. For Siemens MyoMaps, we used the motion-corrected T1 maps produced by the scanner's image processing software.
|
We have demonstrated the added value of MBSS-T1 in free-breathing cardiac T1 mapping using a 5-fold experimental setup. This was done on a publicly available free-breathing quantitative dataset of 210 patients [4], acquired using the STONE sequence [30], and an in-house dataset of 19 patients acquired using the MOLLI sequence [24], for both free-breathing and breath-hold scans. Our approach was compared to baseline methods for deep-learning-based image registration [3, 15], as well as the T1 maps produced on the scanner.
|
Data was acquired on a 3T Siemens MRI system (PRISMA, Siemens Healthineers, Erlangen, Germany) equipped with a 32-channel body coil, using an ECG-triggered, free-breathing, slice-interleaved T1 mapping MOLLI sequence (MyoMaps, SIEMENS Healthcare). The acquisition parameters were as follows: field of view (FOV) = 306 × 360 mm², flip angle of 35°, and an 8 mm gap between slices. Five slices were captured from the base to the apex in the short-axis view for each subject at eight distinct time points. The dataset consists of cardiac images obtained from 19 patients undergoing two scans: one during breath-hold and another during free breathing. Additionally, extra breath-hold and free-breathing scans were performed for five patients for test-retest analysis.
|
A publicly available myocardial T1 mapping dataset [4, 5] consists of 210 subjects, 134 males and 76 females, aged 57 ± 14 years. All subjects were diagnosed with or suspected of having cardiovascular diseases. The imaging was performed on a 1.5T MRI scanner (Philips Achieva) equipped with a 32-channel cardiac coil, using an ECG-triggered, free-breathing, slice-interleaved T1 mapping sequence (STONE) [30]. The acquisition parameters were as follows: field of view (FOV) = 360 × 351 mm² and voxel size = 2.1 × 2.1 × 8 mm³. Five slices were captured from the base to the apex in the short-axis view for each subject at 11 distinct time points. Additionally, the dataset included manual expert segmentations of the myocardium [4]. The images were resized to 160 × 160 pixels per time point, and min-max normalization was applied to the entire sequence to standardize the image intensities.
|
C
|
To build ubiquitous intelligence at the edge of wireless networks, federated learning (FL) stands out as a promising distributed learning approach due to its privacy-enhancing characteristic [2, 3]. In a wireless FL system, multiple distributed devices communicate with a parameter server (PS) via wireless links for collaborative model training [4, 5]. To enhance communication efficiency of wireless FL, over-the-air computation (AirComp) has emerged as a key technique by exploiting the waveform superposition property of multiple access channels. Specifically, AirComp enables fast aggregation of gradients from distributed devices through non-orthogonal multiple access, aligning with FL’s requirement of averaging local gradients without necessitating access to individual values [6].
|
As a cost-effective physical-layer technology, reconfigurable intelligent surface (RIS) has been extensively studied to support various communication applications due to its capability for smart channel reconstruction [9, 10]. In this paper, we introduce low-cost RIS to achieve statistical interference elimination across different clusters and facilitate simultaneous multi-cluster computation over-the-air, thereby enhancing the efficiency of personalized AirFL.
|
Note that the operation in (1) requires the PS to sum the local gradients of devices in each cluster separately. By applying AirComp, all devices simultaneously upload analog signals of their local gradients to the PS, achieving summation over-the-air. However, the analog nature of AirFL means the PS cannot distinguish between the gradients of different clusters.
|
In the following, we introduce an RIS-enabled personalized AirFL framework to address this challenge. Each cluster is assisted by an RIS with $N$ reflecting elements to help realize the personalized model aggregation. To support simultaneous multi-cluster gradient estimation, at least $M$ receiving antennas are required. Without loss of generality, we consider a PS equipped with $M$ receiving antennas. Then, the received signal at the PS in the $t$-th round is $\mathbf{Y}_t = [\mathbf{y}_{1,t}, \mathbf{y}_{2,t}, \cdots, \mathbf{y}_{M,t}]^H \in \mathbb{C}^{M \times D}$.
|
Although AirComp-enabled FL (AirFL) offers significant performance gains, it does not address the data heterogeneity of most real-life FL scenarios with non-independent and identically distributed local datasets. Such data heterogeneity hinders the generalization of a single global consensus model. To this end, preliminary work has developed a personalized AirFL framework via clustering algorithms, where different models are trained for different clusters under the orchestration of the PS [7, 8]. However, this personalized framework requires large-scale receiving antenna arrays to combat interference, leading to a significant escalation in hardware cost.
|
D
|
The test results show that, even with highly efficient RNN-based models and in SNR conditions down to -5 dB, updating only 50% of the GRU (Gated Recurrent Unit) neurons at each step can still achieve the same speech enhancement results.
|
However, this approach results in varying numbers of neurons requiring updates at each step, leading to dynamically changing computation loads. For practical implementations, the hardware still needs to be capable of performing all the computation required to update every neuron in order to satisfy the real-time processing constraint. The performance of the threshold-based select gate will be discussed in future work.
|
In the future, our aim is to investigate the applicability of these models to other modalities and tasks.
|
However, the computational complexity of this process is $\mathcal{O}(J)$ [19] and is negligible compared to the other costs of updating the GRU equations.
|
In addition, these models have been tested primarily on classification tasks, and their performance on regression tasks, such as speech enhancement, remains to be studied.
|
B
|
As presented in Table I, our NEST-L model outperforms WavLM-base++ [8], which has a similar parameter count, on all tasks, and also outperforms WavLM-large [8], which is 3x its size, on speaker verification (SV), speaker diarization (SD) and phoneme recognition (PR). When compared with the XEUS [9] model trained on 10x the data, our NEST-XL model still achieves better performance on all speaker and content tasks, with especially large improvements on speaker verification, speaker diarization and phoneme recognition. Overall, we achieve new state-of-the-art results on the SID, SV, SD, PR and ASR tasks compared with WavLM [8], which uses a similar amount of data, as well as XEUS [9], which is trained on much larger data, demonstrating the effectiveness of NEST across various downstream speech processing tasks.
|
TABLE IV: DER results on speaker diarization. Underline indicates the second-best results. Starred (*) systems are not end-to-end systems; they involve clustering steps.
|
TABLE III: Results on speech translation from English to German, French and Spanish. The BLEU score is used as the metric, with punctuation and capitalization included in the metric calculation. Underline indicates second-best performance.
|
We further study how NEST can help speech-to-text translation (AST) and present the results in Table III. We use the same model architecture and training procedure as proposed in Canary [21], while the training data contains 42K hours of English ASR data with machine-generated translations [45] from English (En) to German (De), French (Fr) and Spanish (Es). We compare our model with other SOTA AST models, SeamlessM4T [19] and Canary [21], on the Europarl [42], mExpresso [46] and FLEURS [47] test sets. Given the same number of parameters but much less training data, a gap remains between Canary [21] and our model on all evaluated datasets. Also, since Canary [21] is initialized with a multi-lingual ASR encoder pretrained on all of the evaluated languages, it is expected to perform better than the NEST initialization. Nonetheless, our model outperforms SeamlessM4T [19] and achieves the second-best average BLEU scores on En→De, En→Es and En→Fr translations, showing that the NEST framework can help achieve impressive AST performance with less data.
|
TABLE II: Results on multi-lingual ASR with punctuation and capitalization. Performance is evaluated by word error rate (WER) including native punctuation and capitalization from the source datasets.
|
B
|
$\mathcal{L}_{\mathrm{sic}} = \frac{1}{2}\left[\mathrm{H}\left(\boldsymbol{y}^{\mathrm{s2i}}(S), \boldsymbol{p}^{\mathrm{s2i}}(S)\right) + \mathrm{H}\left(\boldsymbol{y}^{\mathrm{i2s}}(I), \boldsymbol{p}^{\mathrm{i2s}}(I)\right)\right]$
|
Setup. The HuBERT model used in our experiments is HuBERT-Large, while the BLIP-2 image encoder is ViT-L/14. Both the HuBERT and BLIP-2 parameters are frozen throughout the training process. The speech encoder and the multimodal encoder are both transformer encoders; each has eight attention heads, and their hidden dimension matches that of HuBERT. In all our experiments, we set the momentum coefficient $m$ to 0.998 and the balancing factor $\alpha$ to 0.4 for simplicity. The image queue size is set per dataset: 1024 for Flickr8k and 16384 for SpokenCOCO. Since the two datasets contain multiple speech utterances per image, we change the ground-truth label of SIC to account for multiple positives, where each positive has a ground-truth probability of $1/n$, with $n$ the number of positive samples. During inference, we first compute the feature similarity score $s_{\mathrm{sic}}$ for all speech-image pairs. Then we take the top-$k$ candidates and calculate their SIM score $s_{\mathrm{sim}}$ for ranking. For Flickr8k, $k$ is set to 16, while for SpokenCOCO, $k$ is set to 32. All models are trained with the Adam optimizer with a weight decay of $10^{-6}$, a batch size of 256, and 40k steps in total. The learning rate linearly increases to $10^{-4}$ over the first 4k steps and decays to $10^{-8}$ afterward. All experiments are conducted on a machine with 8 32GB V100 GPUs.
|
The speech-image pairs used for training can be noisy, with positive pairs that are sometimes only weakly correlated. This means the speech may contain words unrelated to the image, or the image may contain entities not mentioned in the speech. Furthermore, in speech-image contrastive learning, negative speeches associated with an image may still contain relevant content; however, the one-hot labels for SIC penalize all negative predictions.
|
Figure 1: The HuBERT and speech encoder are utilized to extract speech embeddings. The BLIP-2 image encoder is responsible for extracting image embeddings. The speech and image embeddings are fed into the multimodal encoder for interaction.
|
where $\mathrm{H}$ is cross entropy and $\boldsymbol{y}^{\mathrm{sim}}$ is a 2-dimensional one-hot vector representing the ground-truth label. To improve the model's performance, we propose a strategy to sample hard negatives for the SIM task with zero additional computational overhead. A negative speech-image pair is hard if the two share similar semantics but differ in fine-grained details. We use the contrastive similarity from Equation 1, already calculated for the SIC task, to find hard negatives in the image embedding queue. For each speech in a mini-batch, we sample one negative image from the queue, with images of higher contrastive similarity to the speech more likely to be sampled.
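A minimal sketch of this sampling step: reusing the (B, Q) speech-to-queue similarity matrix already computed for SIC, one negative index per speech is drawn with probability increasing in similarity. The softmax weighting is an illustrative choice, not necessarily the paper's exact scheme.

```python
import torch

def sample_hard_negatives(sim: torch.Tensor) -> torch.Tensor:
    """Draw one hard-negative queue index per speech in the batch.

    sim: (B, Q) contrastive similarities between batch speeches and the
    Q queued image embeddings (already computed for the SIC loss, so this
    adds no extra forward passes). In practice the true positive, if
    present in the queue, would be masked out; omitted here for brevity.
    """
    weights = torch.softmax(sim, dim=1)              # (B, Q) sampling probs
    idx = torch.multinomial(weights, num_samples=1)  # (B, 1) queue indices
    return idx.squeeze(1)
```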
|
B
|
This metric guarantees that if we have observations up to time $n$, then we will have informative forecasts up to time $n+L$, since the forecast distribution will always be at distance at least $\epsilon$ from the marginal distribution.
|
The second subsection is devoted to analyzing how the dynamics of the queue affect its predictability; at the end, multi-hop queues are analyzed under limited observability.
|
This measure quantifies the chain’s mixing properties: a larger spectral gap implies faster convergence, meaning that the distribution of the chain’s states will quickly approach the stationary distribution.
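Concretely, for a finite-state chain with transition matrix P, the spectral gap can be computed from the eigenvalue magnitudes, as in the sketch below (the 2-state example values are hypothetical):

```python
import numpy as np

def spectral_gap(P: np.ndarray) -> float:
    """Spectral gap of a row-stochastic transition matrix P:
    1 - |lambda_2|, where lambda_2 is the second-largest-magnitude
    eigenvalue. A larger gap implies faster mixing to stationarity."""
    eigvals = np.linalg.eigvals(P)
    mags = np.sort(np.abs(eigvals))[::-1]   # descending; mags[0] == 1
    return float(1.0 - mags[1])

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(spectral_gap(P))   # eigenvalues 1.0 and 0.7, so the gap is 0.3
```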
|
Essentially, it reflects how quickly the observations lose their importance as the system evolves toward its stationary state.
|
As for the marginal distribution, regardless of the observations, it also becomes a mixture model with the same kernels, but with weights defined by the Markov chain's stationary state probabilities.
|
C
|
To achieve absolute security through PLS, researchers have proposed using artificial noise (AN) to degrade Eve's channel quality in wireless communications [4, 5, 6, 7, 8]. By adding AN to the communication channel, these methods increase Eve's interference level and prevent her from successfully intercepting the communication. However, a significant limitation of this approach is the rapid attenuation of the AN over distance in wireless propagation. If Eve is located close to the signal sources and far from the AN sources, the AN may not sufficiently degrade her channel, leaving the communication vulnerable to eavesdropping.
|
First, the measured frequency response of the proposed system for the generated AN is shown in Fig. 5. In these measurements, a single-frequency sinusoid ranging from 0 Hz to 4 kHz, used to mimic the AN, is generated by the AN generator. We captured the transmitted AN at position $x=0$ meters and the residual AN after cancellation by the telephone hybrid. Both the transmitted and residual AN are plotted in the figure. The results demonstrate that the hardware prototype achieves significant AN cancellation, with approximately 26 dB of suppression. This high level of cancellation effectively minimizes the AN and prevents it from interfering with the desired signals. Importantly, the desired signals are preserved with only negligible degradation, confirming that signal quality is maintained throughout the process. Furthermore, the frequency response of both the transmitted and residual AN remains remarkably consistent and flat up to 4 kHz. This stability indicates that the system performs reliably across this frequency range without introducing significant distortions or variations. It also highlights the effectiveness of the hardware design in maintaining signal fidelity while providing robust AN cancellation.
|
In contrast, wire-line telephone systems experience much lower signal and AN attenuation over distance [9]. For example, AN transmitted by Bob along a twisted differential line may attenuate by only a few decibels per hundred meters. This low attenuation allows Bob to transmit AN that effectively conceals Alice's messages over distances of up to a thousand meters. This characteristic makes wire-line telephone systems particularly well-suited for PLS techniques based on AN.
|
Lastly, Fig. 7 illustrates the relationship between secrecy capacity and AN power under a measured AN cancellation capability of 26 dB. The secrecy capacity is computed using (8) for wire-line lengths of 100, 200, and 300 meters. From the figure, it is evident that as the wire-line length increases, the secrecy capacity decreases. This is due to the greater signal attenuation over longer distances, which reduces the SINR advantage for the legitimate receiver and makes it easier for eavesdroppers to intercept the signal. Moreover, the figure shows that increasing the power of AN significantly enhances the system’s secrecy capacity. This improvement occurs because stronger AN creates more interference for Eve, effectively masking the legitimate signal and making it more difficult for unauthorized parties to extract useful information. As a result, the system’s overall security is bolstered. These findings emphasize the importance of balancing wire-line length and AN power to optimize secrecy capacity. In scenarios with longer wire-line distances, increasing AN power becomes particularly crucial to maintaining a high level of security in the communication system.
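Since Eq. (8) is not reproduced in this excerpt, the sketch below shows only the generic Gaussian wiretap form that such secrecy-capacity expressions reduce to, with Bob's and Eve's SINRs as inputs; folding in the wire-line attenuation α(L) and the AN cancellation β would happen inside those SINR values.

```python
import numpy as np

def secrecy_capacity(sinr_bob: float, sinr_eve: float) -> float:
    """Gaussian wiretap-channel secrecy capacity in bits per channel use:
    C_s = max(0, log2(1 + SINR_Bob) - log2(1 + SINR_Eve))."""
    return max(0.0, np.log2(1 + sinr_bob) - np.log2(1 + sinr_eve))

# Raising AN power lowers Eve's SINR and thus raises secrecy capacity:
print(secrecy_capacity(sinr_bob=100.0, sinr_eve=1.0))  # ~5.66 bits/use
```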
|
As can be seen from (9), the AN cancellation capability $\beta$ can be significantly smaller than the wire-line's power attenuation factor $\alpha(L)$. Therefore, the inequality is easily satisfied in practice. It is also important to note that (9) establishes the necessary conditions for ensuring absolutely secure transmission in the wire-line communication system. Clearly, as the length of the wire-line increases, there is a corresponding need for greater AN cancellation capability.
|
B
|
The results, shown in Table 3, indicate that the supervised model trained on the Flickr8k training set is significantly outperformed by the model trained on the SpokenCOCO training set. This highlights the excellent generalization ability of our model. The superior performance can be attributed to training on the larger SpokenCOCO dataset compared to the smaller Flickr8k dataset, demonstrating the model's scalability.
|
Table 4 studies the effect of CMD on cross-modal retrieval. In comparison to training without CMD, the inclusion of the CMD training task resulted in a 2.3% improvement on the Flickr8k dataset and a 1.9% improvement on the SpokenCOCO dataset, indicating the effectiveness of CMD.
|
The cross-modal retrieval performance of our method is presented in Table 2. In comparison to previous methods, we have achieved the best retrieval performance in both speech-to-image retrieval and image-to-speech retrieval tests. Our model has shown significant improvements over the previous best model [16], with increases of 2.0% in mean R@1, 2.4% in mean R@5, and 1.9% in mean R@10 on the Flickr8k dataset. Besides, our model has demonstrated improvements of 1.7% in mean R@1, 0.7% in mean R@5, and 0.7% in mean R@10 on the SpokenCOCO dataset. These improvements can be primarily attributed to our model’s ability, achieved through joint training with contrastive learning and CMD tasks, to identify shared semantics between images and speech while also capturing their subtle differences.
|
The results, shown in Table 3, indicate that the supervised model trained on the Flickr8k training set is significantly outperformed by the model trained on the SpokenCOCO training set. This highlights the excellent generalization ability of our model. The superior performance can be attributed to training on the larger SpokenCOCO dataset compared to the smaller Flickr8k dataset, demonstrating the model's scalability.
|
Our framework has exhibited a significant improvement of 2.0% in mean R@1 on the benchmark Flickr8k Audio Captions Corpus and 1.7% in mean R@1 on the SpokenCOCO dataset, surpassing the performance of the current state-of-the-art approach.
|
A
|
This study is organized around two primary objectives, with the first objective concentrating on the classification of HER2 Positive versus HER2 Negative cases. For this task, we exclusively utilized the TCGA-Yale HER2 cohort dataset. A total of 120 whole slide images (WSIs) were carefully selected and subjected to a self-supervised learning approach using the MoCo-v2 framework. The core of this process involved training a ResNet50 encoder, which was rigorously trained over the course of 300 epochs to ensure a robust and reliable feature extraction process. Following this initial stage, the extracted features were used to train an attention module, a critical component of our classification pipeline, which was trained over an additional 100 epochs. To ensure the generalizability and reliability of our model, the dataset was systematically divided into four distinct folds. Each fold consisted of 160 slides for training purposes and 22 slides reserved for testing, allowing for a thorough evaluation of the model’s performance across different subsets of the data. During the training process, we experimented with a range of initial learning rates—specifically 1e-3, 1e-4, and 1e-5—alongside different weight decay values, also tested at 1e-3 and 1e-5. After carefully evaluating the results from all these permutations, we identified the optimal hyperparameter settings: an initial learning rate of 1e-3 coupled with a weight decay of 1e-5. These settings provided the best balance between model complexity and performance. The performance of the model was measured using the Area Under the Curve (AUC) metric, and the mean AUC achieved across the four folds was calculated to be 0.85 ± 0.02, indicating strong and consistent performance. To further illustrate the effectiveness of our approach, the confusion matrix and the Receiver Operating Characteristic (ROC) curve for the first fold are presented in Figure 4. These plots show the model’s ability to accurately distinguish between HER2 Positive and HER2 Negative cases, reflecting the overall success of our classification approach.
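For illustration, a minimal attention module over pre-extracted patch features might look like the sketch below; the hidden size, the feature dimension (ResNet50's 2048), and the tanh scoring network are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Sketch of a slide-level attention module over patch features.

    feats: (num_patches, feat_dim) MoCo-v2/ResNet50 patch embeddings for
    one WSI. A learned attention weight per patch produces a slide
    embedding that feeds the HER2 classifier.
    """

    def __init__(self, feat_dim: int = 2048, hidden: int = 256):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.attn(feats), dim=0)   # (P, 1)
        slide_embedding = (weights * feats).sum(dim=0)     # (feat_dim,)
        return self.classifier(slide_embedding)            # HER2 logit
```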
|
The second major task in this study focused on classifying slides with a HER2 score of 2+, a particularly challenging category due to the ambiguity in traditional testing methods. This task involved differentiating between HER2 FISH positive and HER2 FISH negative cases, where HER2 status determination is only possible through fluorescence in situ hybridization (FISH) testing, a critical step for accurate diagnosis and treatment planning. We utilized data from both the TCGA-Yale HER2 cohort and the TCGA-BRCA cohort for this task. The ResNet50 encoder, previously trained on 120 slides in the first task, was reused to ensure consistency and leverage the learned features. For the attention module, training was conducted with 160 slides from the TCGA-Yale HER2 cohort over 100 epochs, while 44 slides from the TCGA-BRCA cohort were reserved for testing. The optimal hyperparameters from the first task—an initial learning rate of 1e-3 and a weight decay of 1e-5—were maintained. During inference on the TCGA-BRCA cohort, the model achieved an Area Under the Curve (AUC) of 0.81, demonstrating strong performance in distinguishing between HER2 FISH positive and HER2 FISH negative cases within the challenging 2+ score category. The confusion matrix and ROC curve for this task, shown in Figure 5, provide valuable insights into the model’s decision-making accuracy.
|
Figures 6 and 7 provide a detailed visualization of two whole slide images from the TCGA-BRCA dataset, one from the HER2 FISH positive class and one from the FISH negative class. For each of these slides, we generated two distinct heatmaps, corresponding respectively to the FISH positive and FISH negative classes. These heatmaps serve as visual tools to highlight the regions of the slide that the model identified as most relevant for classification. In Figure 6, we focus on the HER2 FISH positive slide. The heatmaps generated for this slide reveal that the tumor regions are consistently given greater attention than the non-tumor areas, which underscores the model’s ability to focus on clinically significant regions. In particular, the heatmap for the FISH Positive class shows a more intense red coloration in the tumor areas compared to the FISH Negative heatmap. This vivid red coloring in the FISH Positive heatmap strongly indicates that the model has made an accurate prediction regarding the HER2 status of the entire slide.
|
This study is organized around two primary objectives, with the first objective concentrating on the classification of HER2 Positive versus HER2 Negative cases. For this task, we exclusively utilized the TCGA-Yale HER2 cohort dataset. A total of 120 whole slide images (WSIs) were carefully selected and subjected to a self-supervised learning approach using the MoCo-v2 framework. The core of this process involved training a ResNet50 encoder, which was rigorously trained over the course of 300 epochs to ensure a robust and reliable feature extraction process. Following this initial stage, the extracted features were used to train an attention module, a critical component of our classification pipeline, which was trained over an additional 100 epochs. To ensure the generalizability and reliability of our model, the dataset was systematically divided into four distinct folds. Each fold consisted of 160 slides for training purposes and 22 slides reserved for testing, allowing for a thorough evaluation of the model’s performance across different subsets of the data. During the training process, we experimented with a range of initial learning rates—specifically 1e-3, 1e-4, and 1e-5—alongside different weight decay values, also tested at 1e-3 and 1e-5. After carefully evaluating the results from all these permutations, we identified the optimal hyperparameter settings: an initial learning rate of 1e-3 coupled with a weight decay of 1e-5. These settings provided the best balance between model complexity and performance. The performance of the model was measured using the Area Under the Curve (AUC) metric, and the mean AUC achieved across the four folds was calculated to be 0.85 ± 0.02, indicating strong and consistent performance. To further illustrate the effectiveness of our approach, the confusion matrix and the Receiver Operating Characteristic (ROC) curve for the first fold are presented in Figure 4. These plots show the model’s ability to accurately distinguish between HER2 Positive and HER2 Negative cases, reflecting the overall success of our classification approach.
|
We developed a customized weak supervision classification technique, combined with MoCo-v2 contrastive learning, to differentiate between HER2 positive and HER2 negative breast tumors using H&E stained sections. The training pipeline consists of three steps: extracting patches from whole slide images, using a ResNet50 encoder pre-trained with MoCo-v2 self-supervision, and training the final attention module. The TCGA-Yale dataset was employed for both training and testing, resulting in an AUC of 0.85 ± 0.02 across four different folds. Additionally, we evaluated our model on 44 H&E slides from the TCGA-BRCA dataset, all of which had a HER2 score of 2+ and included corresponding HER2 status and FISH test results. These cases are considered equivocal, often necessitating a costly FISH test for clarification. Our pipeline achieved an AUC of 0.81 on these challenging H&E slides.
|
A
|
We observe a trade-off between robust generalization and discriminability through the generalization bound.
|
Recent works have focused on proving generalization bounds for GNNs without any dependence on the underlying model responsible for generating the graph data [46, 17, 51]. Generalization analysis for graph classification is studied in a series of works where graphs are drawn from random limit models [44, 37, 35, 28]. In [53], the authors study the generalization of GNNs over graphs generated from an underlying manifold at both the node and graph levels. These works assume that the training and testing graphs are generated from the same underlying model. In practice, there are inevitable scenarios with generative model mismatch between testing and training graphs [30]. Hence, it is crucial to demonstrate that the generalization ability of GNNs remains robust to generative model mismatch. This would provide a promising assurance that GNNs can maintain strong generalization performance even in noisy environments.
|
Existing works on the in-distribution generalization of GNNs fall into node-level and graph-level tasks. For node classification tasks, there are works providing generalization bounds for GNNs based on the Vapnik-Chervonenkis dimension [46], algorithmic stability analysis [51, 65], PAC-Bayesian analysis [34], and Rademacher complexity [12]. For graph classification tasks, the authors prove generalization bounds via Rademacher complexity [17] and PAC-Bayes analysis [31, 21]. The authors consider a continuous graph limit model to analyze the generalization of GNNs on graph classification
|
in [37, 35, 28]. In [54], the authors prove the generalization of GNNs on graphs sampled from a manifold for both node and graph classification tasks. These works consider only the in-distribution case, where the training and testing data are sampled from the same distribution.
|
The authors in [49] propose a domain generalization framework for node-level tasks on graphs to address distribution shifts in node attribute distributions and graph topology. In [13], the authors study the out-of-distribution generalization of GNNs on graph-level tasks with a causal representation learning framework. In [30], the authors handle graph distribution shifts in complex
|
B
|
Table 1: A/D/V model resources for the 32-bit, non-quantized .onnx. All .onnx models take 16 kHz audio as input and include all parameters, preprocessing, and log-Mel spectrogram extraction. Values measured on an Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz.
|
Hence, Wav2Small is a potential replacement for the expensive input audio extractor of Transformer architectures, such as
|
The paradigm of a VGG feature extractor followed by transformer layers has shown great performance for Speech
|
as an inexpensive feature extractor for large transformer architectures of the Wav2Vec2/WavLM family. Nonetheless
|
The need for SER on low-resource hardware drives us to investigate small architectures for A/D/V. Neural Architecture Search has produced useful architectures for categorical-emotion SER [2]. However, SOTA A/D/V is dominated by Transformer models, such as Wav2Vec2.0 [1] and WavLM [3], both having a VGG7 input feature extractor. This VGG architecture is chosen because the skip connections of ResNet have a high RAM footprint.
|
D
|
$H_n(\boldsymbol{g}_{\mathrm{u}}(\mathbf{x}),\mathbf{p})\leq 0,\quad n=1,2,\dots,N.$
|
by ignoring the LOS constraint $\mathbf{x}\in\tilde{\mathcal{D}}$.
|
where the position $\mathbf{x}(t)$ is projected back to $\mathcal{S}$
|
To investigate the property of the trajectory $\mathbf{x}(t)\in\mathcal{S}$,
|
It is very challenging to handle the constraint $\mathbf{x}\in\mathcal{S}$,
|
D
|
Piano roll: The piano roll serves as a historical symbolic representation of music, dating back to the era of player pianos. These self-playing instruments used piano rolls—continuous paper rolls with punched perforations—to automatically perform music. The perforations on the roll, representing note control data, trigger the playing of notes as they pass over a tracker bar. Player pianos were capable of capturing and reproducing the performances of renowned pianists, encoding not only the pitch and duration of notes but also the dynamics of the performance. In modern music technology, the piano roll has evolved into a geometric visualisation that is used for music analysis and generation. This representation plots time on the horizontal axis and pitch on the vertical axis, with each note depicted as an axis-parallel rectangle that encodes onset time, pitch, and duration. Such a two-dimensional representation is particularly compatible with diffusion-based models, and it has been applied to tasks like transcription [CSU+23] and generation [MJXZ23, LQZ+24a].
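Constructing such a representation is straightforward; a minimal sketch (the note tuples and frame rate are illustrative):

```python
import numpy as np

def to_piano_roll(notes, fs=100, n_pitches=128):
    """Rasterise (onset_sec, duration_sec, midi_pitch) note tuples into
    a binary matrix: rows = pitch, columns = time frames at fs Hz."""
    end = max(on + dur for on, dur, _ in notes)
    roll = np.zeros((n_pitches, int(np.ceil(end * fs))), dtype=np.uint8)
    for onset, duration, pitch in notes:
        start = int(round(onset * fs))
        stop = start + max(1, int(round(duration * fs)))
        roll[pitch, start:stop] = 1   # one axis-parallel rectangle
    return roll

# C-E-G played as successive 0.5 s notes.
roll = to_piano_roll([(0.0, 0.5, 60), (0.5, 0.5, 64), (1.0, 0.5, 67)])
```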
|
Notes graph: Graphs emerge as a natural representation of symbolic music since music exhibits innate structures like voices and chords that can be formed into a graph with musical heuristics. [JKKN19] introduced a novel approach to representing music scores as graphs, where each note forms a node and various musical relationships between notes are depicted as edges. This graph-based representation utilises six primary types of edges—next, rest, set (onset), sustain, voice, and slur—to capture the intricate connections within the score.
|
To generate background music corresponding with the body movements of musicians in video clips, Foley Music [GHC+20b] utilises a Graph-Transformer architecture, which includes a Graph Convolutional Network (GCN) encoder and a Transformer decoder. It learns to map the relationship between human body key points detected in videos and MIDI events.
|
The pitch (class) and octave (height) of a note are indicated by the vertical position of the note within, below, or above the staff. Notes have different durations or note lengths, represented by whole notes, half notes, quarter notes, eighth notes etc. Each note length has a specific duration relative to the beat defined by the meter. Roughly speaking, rhythms and melodies are made up of notes with different durations.
|
The model distinguishes forward and backward directions, along with a unique self-connection for each note, culminating in a total of 12 edge types for a comprehensive and detailed musical score representation. Similarly, [KFW23] also created a heterogeneous graph from score notes and tackled the voice separation problem as graph link prediction in multi-trajectory tracking. [ZKD+23] further examined the potential of applying score or performance graphs with various edge designs to music understanding problems and compared them with other representation counterparts.
|
A
|
The rest of the datasets can be sectioned into the remaining four categories. The image-caption/image-text pair category (dataset from
|
The next category, VQA (PathMMU [61], Quilt-VQA [56], PathVQA [73]), is similar to the previous category, as it also contains close-ended and open-ended question-answer pairs, but the associated images are not WSIs but rather low- and medium-quality images. Among these datasets, PathVQA is the first effort to curate a pathology-specific VQA dataset. PathMMU is the latest and largest dataset in this category, and it also provides explainability annotations with each answer.
|
The rest of the datasets can be sectioned into the remaining four categories. The image-caption/image-text pair category (dataset from
|
This dataset was supplemented by pathology-specific data from the large-scale artificial intelligence open network (LAION) data repository [65]. Pathology textbooks and atlases are also large knowledge sources that can be used to extract image-caption/text pairs. In a couple of recent studies [52, 56], educational histopathology videos on YouTube have been used as the source of pathology image and text pairs. However, curation of this kind of dataset requires a series of hand-crafted algorithms and many external tools.
|
MI-Zero [69], ARCH [70]) involves a low or medium-quality image and an associated piece of text for that image. This text can be a short caption with a description of the image or a more elaborate description. ARCH is the earliest dataset in this category that utilized PubMed and pathology textbooks to extract the texts. PathGen-1.5M is the latest dataset in this category, but unlike other datasets in this category, the images are patches extracted from WSIs.
|
D
|
We used the synthetic dataset URBAN-SED (Salamon et al., 2017) and MAESTRO-Real (Martín-Morató et al., 2023), a real-life dataset annotated with soft labels, to construct datasets with noisy labels.
|
The remainder of this paper is structured as follows: Section 2 describes the different types of label noise encountered in Sound Event Detection (SED). Section 3 then introduces the methods for generating noisy labels using both synthetic and real-life datasets. Section 4 details the experimental setup and describes the noise-robust functions evaluated. Finally, Section 5 presents the results of these experiments, along with theoretical analysis and discussion of their implications and significance.
|
For insertion noise, we randomly inserted additional sound event instances into the original URBAN-SED annotations, which were controlled by an $insertion\_rate$ (ranging from 0 to 1.0, with increments of 0.1), to determine the proportion of event instances added to each class. The inserted events’ onset times were randomly set between 0 and 10 seconds, and their durations were aligned with the mean and standard deviation values of the respective event class in the original dataset.
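A sketch of this procedure under stated assumptions (the annotation format and helper names are hypothetical, not the authors' code):

```python
import random

def add_insertion_noise(events, insertion_rate, class_stats, clip_len=10.0):
    """events: list of (onset, offset, label) for one clip; class_stats
    maps label -> (mean_dur, std_dur) estimated from the clean dataset.
    Adds round(n_class * insertion_rate) spurious instances per class,
    with random onsets in [0, clip_len) and class-matched durations."""
    noisy, counts = list(events), {}
    for _, _, lab in events:
        counts[lab] = counts.get(lab, 0) + 1
    for lab, n in counts.items():
        mean_dur, std_dur = class_stats[lab]
        for _ in range(round(n * insertion_rate)):
            onset = random.uniform(0.0, clip_len)
            dur = max(0.1, random.gauss(mean_dur, std_dur))
            noisy.append((onset, min(onset + dur, clip_len), lab))
    return sorted(noisy)
```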
|
To our knowledge, this study is the first to systematically examine the impact of various types of noisy labels on Sound Event Detection (SED) tasks. We provide practical methods for constructing datasets with noisy labels in both synthetic and real-life settings to assess their impact. We discovered that deletion label noise adversely impacts the system more than insertion label noise, as demonstrated by both experimental results and theoretical analysis.
|
URBAN-SED comprises 10 sound event classes, with each audio sample having a duration of 10 seconds. As outlined in Section 2, we explored four label noise types for our experiments: deletion, insertion, substitution, and subjective noisy labels. To create these noisy labels, we modified the original URBAN-SED label files at the sound event instance level, as follows:
|
D
|
Tree search-based sphere decoders (SD) are promising for achieving the optimal hard Maximum Likelihood (ML) [1] performance and Max-Log optimal soft detection performance [5] in the MIMO uplink. All SD schemes consist of a channel-matrix-dependent preprocessing stage and a per-received-vector post-processing stage. The preprocessing stage involves a triangular (e.g., QR) decomposition and only needs to be performed when the channel changes significantly, similar to the matrix inversion of linear detection, and with similar complexity requirements. However, the complexity requirements of the per-received-vector post-processing stage in SD schemes are many orders of magnitude higher than those of linear processing [14, 7, 6].
|
With the popularity of the open-RAN paradigm, power-efficient physical layer solutions that can enhance network performance are timely and necessary. Such solutions are required to meet the stringent latency requirements of the 3GPP physical layer, even in a softwarized implementation. Therefore, a non-linear detection scheme that can substantially reduce the power consumption of a base station with ultra-low complexity requirements becomes an ideal candidate for modern physical layer developments. A practical non-linear detection scheme must ideally have a fixed latency and complexity and be capable of delivering substantial gains compared to linear detection, with a very small complexity increase. A comparably small complexity increase (e.g., $<2\times$) can enable the exchange of linear detection with non-linear detection in existing deployments without significant modifications to the architecture and without compromising the supported bandwidths and the number of user streams.
|
In contrast, non-linear detection can overcome the limitations imposed by linear detection and provide substantial throughput and connectivity gains compared to linear detection [4, 3]. Furthermore, in Section V, we elaborate that non-linear detection can deliver better throughput than linear detection while significantly reducing the number of base station antennas and, therefore, RF chains. As a result, the power consumption of a base station can be reduced substantially by employing non-linear processing. To achieve these gains, non-linear processing schemes that can accurately compute soft information are necessary, leveraging channel decoding schemes employed in current standard-based systems.
|
In this work, we introduce Detection and Approximate Reliability Estimation (DARE), a novel, highly efficient, ultra-low-complexity non-linear detection scheme. DARE can achieve near-optimal hard ML and soft detection performance [5] with a time complexity of order $O(MK)$ per received vector sample. To enable this, DARE exploits a novel detector structure to provide soft bit reliability information based on the region of the received observable (Section IV). Consequently, DARE can approximate the optimal soft information computation with lower complexity than existing non-linear detectors that provide hard estimates. DARE can efficiently quantize the reliability information as a function of complexity, providing a flexible performance/complexity tradeoff. Furthermore, DARE can compute reliability information in a hardware-friendly manner while avoiding any sorting operations, which are a bottleneck for existing non-linear detectors [15, 16]. In contrast to the Antipodal approach, DARE applies to a smaller number of streams, determines the reliability of bits on a per-user basis (rather than characterizing the whole vector), and has a fixed processing latency. As a result, for the first time, DARE can significantly outperform linear soft detection (e.g., throughput gains of 40% even in massive MIMO scenarios) with a maximum complexity that is only $2\times$ that of linear detection (Section V). Furthermore, DARE can provide better throughput than linear MMSE while employing half the base station antennas, resulting in power savings of 500 W [17] for a 64-antenna base station.
|
The recently introduced massively parallelizable non-linear (MPNL) detection scheme has been shown to be efficient [3] in approaching optimal performance and capable of outperforming state-of-the-art detectors. The MPNL detection scheme can minimize processing latency while achieving ML performance by dividing the detection process into parallel processes that do not interact. In contrast, in this work we exploit dependencies to maximize performance gains specifically for a smaller complexity.
|
D
|
To achieve the objective of accurate trajectory tracking in the presence of disturbances, a cascade control strategy is proposed, which is composed of two components: an FxTDO-based MPC controller and an INDI angular velocity controller. The FxTDO-based MPC controller is proposed to track the desired trajectory while simultaneously compensating for the lumped disturbance $\boldsymbol{f}_d$. In addition, the INDI angular velocity controller is developed to track the commands generated by the FxTDO-based MPC controller and deal with the unknown torque disturbance $\boldsymbol{\tau}_d$.
|
Motivations: In summary, the quadrotor is a strongly coupled and underactuated nonlinear system, and it is challenging to achieve satisfactory performance using linear control methods [5, 6]. In contrast, model predictive control (MPC), as a nonlinear optimization method, can effectively address coupled multivariable nonlinear control problems by considering the prediction model constraints. However, the success of MPC depends on the availability of a highly accurate prediction model [17, 18, 19, 20, 23]; rapidly providing such an accurate model is therefore an urgent problem [18, 19]. Motivated by these observations, a novel fixed-time disturbance observer (FxTDO) is proposed to accurately estimate the lumped disturbances with fast convergence. Subsequently, by integrating the estimation from the FxTDO with the nominal model, the FxTDO-based MPC (FxTDO-MPC) algorithm is developed to achieve robust trajectory tracking of the quadrotor in the presence of disturbances. Fig. 1 shows the trajectory of the quadrotor using the proposed FxTDO-MPC algorithm in real-world experiments. The main contributions of this paper can be summarized in the following two aspects.
|
By integrating the estimation of the FxTDO into the prediction model, an FxTDO-based MPC controller is developed to achieve robust trajectory tracking of the quadrotor. The prediction model employed within the MPC problem is formulated by
|
By integrating the disturbance observations into the prediction model within the MPC framework, an FxTDO-MPC algorithm is developed to achieve robust trajectory tracking of the quadrotor. Simulations and real-world experiments are presented to validate the effectiveness of the proposed algorithm.
|
This paper proposes an FxTDO-MPC algorithm for robust trajectory tracking of the quadrotor in the presence of disturbances. Firstly, the FxTDO is introduced to estimate the lumped disturbances; the convergence of the estimation error within a fixed convergence time is guaranteed by bi-limit homogeneity and Lyapunov techniques. Then, the observer-based model predictive controller is formulated by integrating the estimation of the FxTDO into the prediction model. The proposed method achieves accurate trajectory tracking and robust disturbance rejection of the quadrotor, and simulations and real-world experiments are conducted to evaluate its effectiveness. Future work will focus on extending the proposed method to other vehicle models, such as fixed-wing UAVs, and to diverse operational environments, such as scenarios involving actuator failures.
|
B
|
Finally, the Feature Extraction uses a latent model, Principal Component Analysis (PCA), to reduce the dimensionality of the data. PCA applies a linear transformation to the data based on the eigenvectors of the covariance matrix of the data; the eigenvectors with the highest eigenvalues are then used to transform the data [10]. PCA is applied to two different datasets, the dataset modified by EA and the raw dataset, as shown in Fig. 4. The result gives three principal components, due to the requirement that the accumulated explained variance ratio exceed 95%. The explained variance ratio is a metric of how much variance each principal component contains with respect to the original database. The first principal component far exceeds the other two for both databases. The deviation of each principal component between the two datasets is quite low, but has a greater effect on generalization, as shown later.
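The 95% criterion maps directly onto standard tooling; a minimal scikit-learn sketch (the input matrix is a placeholder for the pump-current features):

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(1000, 12)   # placeholder for the (EA-filtered) data
pca = PCA(n_components=0.95)    # keep components until 95% variance
X_reduced = pca.fit_transform(X)
# On the pump-current data described above this criterion yields three
# principal components; inspect the ratios to verify:
print(pca.n_components_, pca.explained_variance_ratio_)
```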
|
The purpose of the CDF is to detect anomalies in time series of the pump current, but the performance metrics of the training and test datasets are quantified measurements for shuffled datapoints. In order to evaluate the performance of the algorithms on the time series, we introduce change-point detection (CPD) as a metric. The CPD is defined as the minimal change in the pump current which is detected by the algorithms with EA + PCA, shown in Tab. III. The value $I_0$ denotes the nominal pump current and $I$ the actual pump current; therefore $I/I_0$ can be interpreted as drift. The results show that the C-Means algorithm is able to detect a drift of 8.1% in the pump current. The Probabilistic and Possibilistic algorithms are able to detect drifts of 5.9% and 4.9% in the pump current, respectively. Therefore, the PossCP enables CPD significantly earlier than the predefined thresholds, which are typically set to 10%, especially for arbitrary operating conditions.
|
The learning behavior of the algorithms under different settings for the CDF is illustrated in Fig. 5. First of all, it is noticeable that the convergence criterion is reached within 30 iterations in each case. Nonetheless, the classical FCM tends to soar in the first iterations for all given CDF configurations, which implies trouble in finding the best gradient for the provided dataset. In contrast, the ProbCP reduces the soaring and provides a more robust behavior for the CDF configurations with enabled EA and EA + PCA. Finally, the PossCP eliminates the soaring completely, exhibits the most robust learning behavior of all three clustering algorithms, and is consistent across all CDF configurations.
|
Second, the performance of the algorithms is evaluated by using the data after the feature selection with EA. The results are shown in Tab. I. The EA slightly improves the performance of the algorithms on the training and test dataset. Nonetheless, the performance of the algorithms on the test dataset with 86.3 %, 78.7 % and 74.3 % is insufficient.
|
Finally, the performance is determined for a combination of EA and PCA, as shown in Tab. I. The EA can improve the performance of the algorithms with PCA by up to 3.0% on the training dataset and up to 1.4% on the test dataset. This is due to a change in the principal components, which are generated by the transformation and thus produce better learnability. Therefore, the combination of the fuzzy clustering algorithms with EA and PCA is the best-performing setup for the CDF.
|
B
|
Word-missing: This pertains to the omission of certain words. The randomly chosen missing word is replaced with a silence of equivalent duration.
|
(i) Dysfluency injection: We first convert the ground-truth reference text of VCTK [11] into IPA sequences via the VITS phonemizer [12], then add different types of dysfluencies at the phoneme level according to the TTS rules.
|
In this work, we tackle dysfluency modeling from a totally different perspective. Still following the dysfluency modeling criterion [1], we develop an end-to-end model which directly predicts dysfluencies and time regions from dysfluent speech and reference text input, without any handcrafted templates. To create training data, we introduce VCTK-TTS (7X larger than VCTK [11]), a synthetic dysfluency dataset created using VITS [12], including repetition, missing, block, replacement, and prolongation at both the phoneme and word levels. VCTK-TTS offers a more natural representation of speech dysfluencies compared to VCTK++ [1], and the creation process is automated. In addition, we extend VCTK++ by incorporating word-level dysfluency and obtain a new dataset named VCTK-Stutter (5X larger than VCTK), thus achieving word- and phoneme-level detection. Our newly proposed datasets have the potential to set a standard benchmark for studies in this field. For the dysfluency detection task, we drew inspiration from YOLO [13] and devised a region-wise prediction scheme that captures both spatial and temporal information. We developed YOLO-Stutter, which takes soft speech-text alignments [12] as input, followed by a spatial feature aggregator and a temporal dependency extractor, to directly predict dysfluency types and time regions. We also collaborated with clinical partners and obtained data from 38 Aphasia subjects. Results on simulated speech, public corpora, and Aphasia speech indicate that YOLO-Stutter achieves state-of-the-art performance even with a minimal number of trainable parameters.
|
We designed TTS rules and injected phoneme and word dysfluencies in text space. VITS [12] was used to generate naturalistic dysfluent speech. This dataset is named VCTK-TTS.
|
Traditional rule-based simulation methods [14, 1, 15] operate in acoustic space, and the generated samples are not naturalistic. We developed a new pipeline that simulates in text space. To achieve this, we first convert a sentence into an IPA phoneme sequence. Then, we develop TTS rules for phoneme editing to simulate dysfluency. These rules are applied to the entire VCTK dataset [12], allowing the voice of the generated speech to vary across the 109 speakers included in VCTK, thus enhancing the scalability of the dataset. We call this VITS-based Simulation. The entire pipeline is detailed in Sec. 2.2.1, and the TTS rules are discussed in Sec. 2.2.2.
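A minimal sketch of one such phoneme-level edit (repetition); the probability and repeat count are illustrative parameters, not the authors' exact TTS rules:

```python
import random

def inject_repetition(ipa_tokens, p=0.1, max_rep=2):
    """Randomly repeat phonemes in an IPA token sequence before
    synthesis (e.g., by VITS); one of several possible edits alongside
    deletion, block, replacement, and prolongation."""
    out = []
    for ph in ipa_tokens:
        if random.random() < p:
            out.extend([ph] * random.randint(1, max_rep))  # stutter
        out.append(ph)
    return out

print(inject_repetition(['p', 'l', 'i', 'z'], p=0.3))
# e.g. ['p', 'p', 'l', 'i', 'i', 'z']
```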
|
D
|
Trigger Evaluation. Trigger evaluation includes MOS and SER Accuracy, which evaluate whether the poisoned speech samples maintain normal quality. The average MOS is a subjective measure, and SER Accuracy is an objective evaluation by a DNN. In the subjective experiment, 10 individuals were invited to participate in an auditory assessment. Each person randomly listened to 30 poisoned samples and the corresponding clean speech samples. They were asked to judge whether the two sentences expressed the same content and whether they sounded normal, and to give scores of 0-5. In the objective evaluation, we used an SER model to calculate the SER Accuracy on the poisoned test dataset. Specifically, SER Accuracy includes Micro-F1 and Macro-F1 scores. The final results of the evaluation are shown in Table II.
|
Based on this, we propose that emotion, a sophisticated composite component of speech formed by rhythm, prosody, and intonation, can also serve as the attack object for speech backdoor attacks. We propose a simple but effective speech backdoor trigger, an emotional voice conversion (EVC) model. The model converts the emotion of speech while preserving other speech components unchanged. We conducted speech backdoor attack experiments on two speech classification tasks, keyword spotting (KWS) and speaker verification systems (SVs). The victim models were trained on poisoned samples and benign samples, noting that the two sample groups belong to different emotional domains. The results demonstrate that our method shows excellent attack effectiveness and stealthiness on both tasks.
|
The experimental results in Table II show that our method and VSVC barely damage the quality of the speech, so their MOS values are close to that of the ground-truth speech. However, BadNets and PBSM make detrimental modifications to the spectrogram and fundamental frequency of the speech, resulting in a deterioration in speech quality; thus, their MOS values are lower than that of the ground-truth speech. In the objective experiment, on the clean test dataset, utterances of neutral emotion were converted to ones of other emotions (mostly angry or happy), and utterances of non-neutral emotion were converted to ones of neutral emotion. The F1 values show that the performance of the EVC trigger aligns with the expected effects of the pre-trained model for emotional speech recognition. We also show different speech triggers in Figure 3. The EmoAttack trigger’s spectrogram remains essentially lossless.
|
where $x$ is the speech with the source emotion, $x'$ is the converted speech, and $e_t$ is the target emotion category or speech.
|
This paper analyzes the differences between backdoor attacks in the image and speech domains. We proposed EmoAttack, a backdoor attack method based on emotional voice conversion. This method preserves speech’s linguistic content and timbre characteristics while modifying a higher-level attribute of speech: emotion. After EmoAttack training, emotional utterances can lead the victim model to produce wrong predictions. We conducted backdoor attack experiments on two speech classification tasks. The experimental results demonstrate the excellent attack effectiveness of EmoAttack. Additionally, we verified that different emotions as target labels result in varying trigger efficiency, with intense emotions yielding better results. The proposed method aims to provide insights into backdoor attacks in the speech domain.
|
B
|
While there is an ongoing search for alternative metrics, the WER remains the commonly used one. We propose an extended Levenshtein distance algorithm that allows the computation of a robust WER while preserving punctuation and capitalisation. We further utilise commonly used algorithms in the field of natural language processing (NLP) to classify transcription errors more granularly. We see the following applications of our approach for future research:
|
The standard metric to report accuracy in ASR research is the Word Error Rate (WER) [3, 4, 5]. It represents the average number of transcription errors per 100 words. The underlying algorithm is the Levenshtein distance, which calculates the minimum number of operations (insertions, deletions, or substitutions) needed to transform one string into another [6]. The WER is computed by determining the minimum edit distance on a word level to measure the number of modifications between an ASR-generated hypothesis transcript and a manually created "error-free" reference. Alternatively, for languages that do not use spaces between words (e.g. Chinese), the Character Error Rate (CER) is used.
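A minimal word-level implementation of this computation (without the punctuation- and capitalisation-preserving extensions proposed here):

```python
def wer(reference, hypothesis):
    """Word Error Rate: (S + D + I) / N_ref via the Levenshtein
    distance computed on word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                     # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                     # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)

print(wer("the cat sat", "the cat sat down"))  # one insertion -> 0.33
```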
|
where $C$ is the number of correct tokens, $S$ the number of substitutions (mismatching tokens), $D$ the number of deletions (missing tokens), and $I$ the number of insertions (incorrect predictions).
|
Figure 3: An interactive web application visualises text differences, error types, and normalisations and calculates several error metrics like WER, SER, and F1-scores.
|
Calculation of individual or combined metrics based on error types (e.g. word, punctuation, capitalisation, number, …).
|
D
|
Initially, the audio signal undergoes a Short-Time Fourier Transform (STFT) with a hop length of 0.016 seconds and a window length of 0.021 seconds, utilizing the Hanning window function. Subsequently, the magnitude spectrum is derived by taking the absolute value of the STFT coefficients. Next, the magnitude spectrum is integrated across the 120 Hz-8000 Hz frequency band to yield a univariate energy sequence. The ratio of the current frame’s energy to that of the preceding frame is computed, and its logarithm is taken to generate a sequence representing the rate of energy change. This sequence is then smoothed using a Butterworth low-pass filter. Peaks in the energy change rate are identified by applying a threshold of 100 to this sequence. For each detected peak in the energy change rate, the corresponding energy peak is located within a window of 5 frames centred on the peak. Ultimately, the cough onset is identified as occurring two frames before the located energy peak position.
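The steps map onto standard SciPy routines; a sketch under stated assumptions (the filter order and cutoff are not given above and are chosen here for illustration, and the threshold of 100 is applied to the smoothed sequence at whatever scale the original implementation used):

```python
import numpy as np
from scipy.signal import stft, butter, filtfilt, find_peaks

def cough_onsets(audio, sr, thresh=100):
    win, hop = int(0.021 * sr), int(0.016 * sr)
    f, _, Z = stft(audio, fs=sr, window='hann',
                   nperseg=win, noverlap=win - hop)
    mag = np.abs(Z)                             # magnitude spectrum
    band = (f >= 120) & (f <= 8000)
    energy = mag[band].sum(axis=0) + 1e-10      # band energy per frame
    rate = np.log(energy[1:] / energy[:-1])     # energy change rate
    b, a = butter(2, 0.2)                       # low-pass smoothing
    rate = filtfilt(b, a, rate)
    peaks, _ = find_peaks(rate, height=thresh)
    onsets = []
    for p in peaks:                             # 5-frame search window
        lo, hi = max(0, p - 2), min(len(energy), p + 3)
        e_peak = lo + int(np.argmax(energy[lo:hi]))
        onsets.append(max(0, e_peak - 2))       # two frames earlier
    return onsets                               # frame indices
```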
|
With the release of large-scale cough datasets, including COUGHVID [33], which contains over 25,000 crowdsourced cough recordings, and Coswara [34], which contains more than 7,000 audio samples from around 1,000 participants for COVID-19 diagnosis, attempts have also been made to use modern deep-learning models for cough classification. Approaches falling into this category preprocess the raw audio data into log-mel spectrogram images so that they can be fed into deep models originally designed for image classification. Xue et al. [30] propose a novel self-supervised learning framework for COVID-19 cough classification: a vision transformer (ViT) is first trained on unlabeled cough data in a self-supervised manner, and the pre-trained model is subsequently fine-tuned on the downstream classification task for COVID-19 screening. Valdes et al. [35] also employ a ViT-based model, the Audio Spectrogram Transformer (AST) [17], for cough signal feature extraction towards the classification of cough types (e.g., dry, wet, whooping, etc.). The employed AST was pre-trained on the large-scale image dataset ImageNet [36] and subsequently on the large-scale audio dataset AudioSet [12]. However, features extracted from the pre-trained models are directly used in the downstream task. We believe the data distribution gap between general audio data (e.g., AudioSet) and cough data restricts the capabilities of deep models without proper transfer learning. To address this limitation, in this study we further fine-tune the ViT-based deep models on cough data to enhance their representation learning and discover disease signatures underlying cough sound data.
|
Audio event classification is a well-formulated research task that has attracted significant attention in the community. Deep learning models have dominated state-of-the-art approaches to this task in recent years. These approaches follow a similar framework in which the raw audio data are converted to log-mel-spectrogram images. Hence, innovations in image classification models can also benefit the audio event classification tasks by proper transfer learning. Such transfer learning is enabled by fine-tuning the pre-trained image classification models on large-scale audio datasets like AudioSet [12].
|
Audio data pre-trained models close the gap between natural images and audio spectrogram images by pre-training deep neural networks on large-scale audio data. During pre-training, the models take audio spectrogram images as the input and hence can be directly applied to downstream tasks for respiratory sound classification. As the pre-trained models may have employed different parameters to generate the spectrogram, the same process for spectrogram generation as that used during pre-training must be employed during fine-tuning.
|
It is the de facto standard to convert raw audio data into 2D spectrogram images for audio event classification using image classification deep models. The generation of spectrogram images is based on the STFT, and the choice of optimal hyper-parameters is coupled with the type of deep model employed for classification.
|
D
|
To verify the above analyses, we plot in Fig. 2(a) the received SNR at the user versus the BS-IRS distance under the near-field LoS BS-IRS channel. The BS is equipped with a single MA. The operating frequency is $f=5$ GHz. The total number of IRS reflecting elements is $M=25^2$. The transmit SNR is $P/\sigma^2=110$ dB. In the FPA benchmark scheme, the antenna is fixed at (10). It is observed that the SNR performance of a single MA and a single FPA is identical for all BS-IRS distances considered, which validates our analyses.
|
To facilitate our performance analyses, we first consider that the BS is equipped with a single MA, i.e., $N=1$, and characterize the maximum BS-user end-to-end channel power gain over different positions within the transmit region $\mathcal{C}_t$ under the optimal IRS passive beamforming. As a result, the APV reduces to a column vector, $\boldsymbol{t}\in\mathbb{R}^{3\times 1}=[x_t,y_t,z_t]^T$. Moreover, we consider that the IRS can achieve a LoS-dominant channel with the BS, which usually holds in practice if the IRS is deployed in the vicinity of the BS [20]. To capture the close BS-IRS distance in this case, we consider in this section a general near-field BS-IRS channel model, which is given by [21]
|
Different from the above case with an IRS, if we consider a near-field LoS channel from the single-MA BS to the user, the BS-user channel is given by
|
To verify the above analyses, we plot in Fig. 2(a) the received SNR at the user versus the BS-IRS distance under the near-field LoS BS-IRS channel. The BS is equipped with a single MA. The operating frequency is $f=5$ GHz. The total number of IRS reflecting elements is $M=25^2$. The transmit SNR is $P/\sigma^2=110$ dB. In the FPA benchmark scheme, the antenna is fixed at (10). It is observed that the SNR performance of a single MA and a single FPA is identical for all BS-IRS distances considered, which validates our analyses.
|
Based on the above, although the antenna position yielding the maximum channel power gain is fixed at (10), the channel power gain within $\mathcal{C}_t$ may vary. As such, it can be inferred that in the case of multiple antennas, the performance gain of MAs over FPAs may still exist, unlike in the single-MA case. To verify this claim, we plot in Fig. 2(b) the received SNR at the user versus the BS-IRS distance with $N=4$ MAs and the other simulation parameters the same as in Fig. 2(a). It is observed that, different from the single-MA case, employing multiple MAs can still yield a performance gain over FPAs if the BS-IRS distance is small. Moreover, the performance gain is observed to decrease with the BS-IRS distance, which is consistent with our previous analyses.
|
B
|
Level 3 has the highest risk of operation due to its high risk tolerance of 1.0. Line $L_6$ is the most dangerous line, with a risk of operation of 3.74. The zero risk of operation on line $L_5$ indicates that it poses no threat to operation. Lines $L_2$, $L_3$, and $L_5$ are crucial for the network because the operator can rely on these lines to serve the demand.
|
The conductor clashing score for energized power lines with conservative fire risk intake under different risk tolerance levels for 24-hour period is shown in Table VI. In Level 1, all lines have 0 risk of operation because the risk tolerance is 0. In Level 2, the risk of operation increases as the risk tolerance is increased from 0 to 0.5.
|
In this section, the conservative and cumulative fire risk intakes for the network operation are assessed. The conservative case considers the sum of fire ignition scores for all lines during a 1-hour period and leads to less risk, while the cumulative case considers the sum of fire ignition scores for all lines during a 24-hour period and leads to higher risk. In each case, different risk tolerance levels are considered to find the appropriate operation scenario.
|
TABLE VI: Quantification of Risk of Operation With Conservative Fire Risk Intake For 24-Hour Period
|
The quantification of the risk of operation with cumulative fire risk intake for the 24-hour period is shown in Table VII. The magenta-highlighted values show the changes from Table VI. In Level 1, the objective is increased to $1.301M compared to the conservative fire risk intake.
|
C
|
Linear models. The linear model was originally used as a regression tool in statistics, predicting outcomes based on a linear relationship between independent and dependent variables. Its applications in machine learning and pattern recognition were discussed by Bishop (2006) in a broader context. The simplicity and interpretability of linear models made them valuable for many tasks, though they can struggle with capturing complex, non-linear relationships. In research conducted by Zeng et al. (2022), linear models were improved into NLinear and DLinear, which are capable of competing with many transformer-based models in time-series forecasting. Another linear-based model for long-term time series forecasting is RLinear, introduced in 2023 (Li et al., 2023). RLinear utilizes Reversible Instance Normalization (RevIN) and the Channel Independent (CI) strategy to improve overall forecasting performance. RevIN operates by normalizing each instance of the data independently, which can lead to an improved convergence rate and reduced overfitting (Kim et al., 2021). Meanwhile, CI is a strategy used in multivariate time-series forecasting that normalizes the data separately for each feature (Han et al., 2024).
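For concreteness, a minimal NLinear-style forecaster in the spirit of Zeng et al. (2022); dimensions are illustrative. Subtracting the last observed value acts as a simple reversible normalisation, and a single linear layer shared across channels realises the channel-independent mapping:

```python
import torch
import torch.nn as nn

class NLinear(nn.Module):
    def __init__(self, seq_len, pred_len):
        super().__init__()
        self.linear = nn.Linear(seq_len, pred_len)

    def forward(self, x):               # x: (batch, seq_len, channels)
        last = x[:, -1:, :].detach()    # per-series anchor value
        x = x - last                    # normalise
        y = self.linear(x.permute(0, 2, 1)).permute(0, 2, 1)
        return y + last                 # de-normalise

model = NLinear(seq_len=96, pred_len=24)
forecast = model(torch.randn(8, 96, 7))  # 7 channels share one mapping
```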
|
Baselines: we choose other well-known neural network models, namely the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM). RNNs were developed in 1986, utilizing Backpropagation Through Time (BPTT) to recognize patterns in sequences of data. RNNs were further upgraded to LSTMs, an algorithm that can recognize patterns in long sequences of data while maintaining useful information (Sherstinsky, 2020). To enable the LSTM to use future context of the data, the Bidirectional LSTM was developed (Graves et al., 2005). We will also utilize VMD together with the mentioned neural network models.
|
Given VMD’s capability to diminish data volatility and the good performance of linear-based models, there exists potential for more compelling time-series forecasting, even with less stable data. These models will be compared against other deep learning models such as the RNN, Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM), which are simpler yet broadly utilized in time-series forecasting research.
|
Neural networks. The neural network is a more robust machine learning algorithm inspired by the human nervous system, consisting of layers of interconnected nodes (neurons) and designed to model complex, non-linear relationships. These models are trained using backpropagation to adjust weights and improve prediction results (Aggarwal, 2018). They range from simple feedforward networks to deep architectures with many layers, such as the Recurrent Neural Network (RNN). Due to the RNN's limitations in long-term forecasting, Long Short-Term Memory (LSTM) was introduced as an improvement: the LSTM retains the RNN architecture but is designed to handle long-term dependencies in data (Sherstinsky, 2020).
|
In this section, we present experiments using LTSF-Linear models and a few neural network models, namely the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM), on 13 real-world datasets. In this research, we compare each model's performance using RMSE values.
|
C
|
Fig. 1: Illustration of the strategies employed by the top-4 ranked system submissions for the CtrSVDD track. An asterisk (*) indicates the additional use of adversarial training strategies for AASIST. A dagger (†) denotes different layer aggregation strategies proposed for WavLM, as opposed to the weighted sum method.
|
The CtrSVDD track of the SVDD challenge was a notable success, attracting 47 submissions, with 37 surpassing the baseline performance. The top teams employed diverse and advanced techniques, such as self-supervised learning, ensemble learning, and adversarial training, demonstrating significant innovation in the field. Detailed system descriptions from eight teams provided valuable insights for future research. This success highlights the progress in deepfake detection for singing voices and sets the stage for further advancements and improvements.
|
Among all submissions, 8 teams submitted system descriptions, with their ranks bolded in Table 4. Based on the submitted strategies, most teams utilized self-supervised learning (SSL) frontends and ensemble learning. For features, both raw waveform and SSL features were extensively explored. The most popular SSL feature used is wav2vec2 XLSR [21], a cross-lingual representation. Popular backend choices included ResNet and AASIST [20], while score averaging was the favored ensemble method.
|
The team “Qishan” developed two subsystems with different SSL features. Each subsystem follows a Sensitive Layer Select (SLS) classifier that uses an adaptive weight allocation method [33] to aggregate SSL features and pool the feature map to a score. The score with a larger absolute value is selected for submission.
|
Table 4: Summary of the CtrSVDD challenge results. The EER without ACESinger is used as the evaluation metric to rank the submissions, while the EER for all attacks is listed for analysis. The rows for both baseline systems are shaded. Teams with bolded ranks submitted the system description.
|
B
|
For performance evaluation, we consider the pixel-wise peak signal-to-noise ratio (PSNR). In addition, the multi-scale structural similarity index (MS-SSIM) and the perceptual metric learned perceptual image patch similarity (LPIPS) [34], which accounts for the nuances of human perception, are also included in the Appendix. We also use the BD-rate metric [35] to compute the average bit-rate saving over all PSNRs.
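PSNR itself reduces to a few lines; a sketch for 8-bit images:

```python
import numpy as np

def psnr(ref, rec, max_val=255.0):
    """Pixel-wise peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(max_val**2 / mse)
```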
|
In this work, we propose a spatial grouping strategy to reduce the transmission overhead to inform the receiver of the vector length for rate matching. To show the effectiveness, we report the performance on CBR saving. Particularly, we compare the CBRs for transmitting the channel symbols 𝐬𝐬\mathbf{s}bold_s and the vector length information k^^𝑘\hat{k}over^ start_ARG italic_k end_ARG in Table I. All the models are optimized on ImageNet dataset. From the results, the overall CBR can be significantly reduced by the spatial merging strategy, at the cost of a slight performance degradation.
|
We quantify the performance by considering the following datasets of different resolutions with necessary preprocessing. Kodak [36]: The dataset consists of 24 images of resolution $512\times 768$ or $768\times 512$.
|
Figure 6: The end-to-end distortion performance versus the CBR over different datasets. The results are evaluated on (a) Kodak and (b) CLIC2022 datasets, at SNR $=10$ dB.
|
Fig. 7 demonstrates the transmission performance across varying channel SNR levels. To ensure fairness, the CBR for these schemes is constrained to 0.0625. For DeepJSCC and NTSCC, the training SNR equals the testing SNR to achieve optimal performance. For the separation-based methods, we test these schemes across different channel coding rates and modulation orders to determine the optimal settings. (Given a bpp value, the CBR $\rho$ can be calculated as $\rho=\frac{K}{C\times H\times W}=\frac{\text{bpp}}{C\times\log_{2}M\times R_{c}}$.)
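Plugging illustrative values into this relation (16-QAM with rate-1/2 coding over RGB images; these parameter choices are hypothetical) shows how a bpp value maps onto the CBR constraint of 0.0625:

```python
from math import log2

def cbr_from_bpp(bpp, C=3, M=16, Rc=0.5):
    """CBR of a separation-based scheme: rho = bpp / (C * log2(M) * Rc)."""
    return bpp / (C * log2(M) * Rc)

print(cbr_from_bpp(0.375))  # -> 0.0625
```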
|
B
|
Key contributions include the proposal of a distributed navigation policy trained in stochastic environments, the utilization of LSTM to mitigate information loss in POMDP, the development of a multi-agent training platform to facilitate idea validation and safe RL, and the introduction of a new reward function to address suboptimal action selection. The policy’s performance is evaluated in simulation and real-world settings, highlighting its effectiveness and potential for future enhancements.
|
To evaluate the ability of the policy, we test it in both simulation and the real world, compared with NH-ORCA (Alonso-Mora et al. (2010)) and the policy of Fan et al. (2020). In the following section, we refer to the policy of Fan et al. (2020) as the CNN policy; the measurement dimension of the LiDAR is $L_n=512$. The LSTM+Attention policy is the proposed policy. We conducted an ablation experiment by removing the attention mechanism from the proposed policy, yielding the LSTM policy. Additionally, we replaced the LSTM layer with a Linear layer to obtain the Linear policy, in order to validate the effectiveness of the LSTM layer. In the simulation tests, we use several metrics to evaluate the policies: the success rate, the collision rate, the trap rate, and the average step. The success rate is the rate at which the agent successfully navigates to the goal in time without collision. The collision rate is the rate at which the agent collides with other objects. The trap rate is the rate at which the agent neither reaches the goal in time nor collides. The average step is the average number of simulation steps the agent takes to reach the goal successfully. We use the simulation step rather than the time cost of navigation because the time cost depends on the calculation speed of the simulator, which cannot exactly reflect the cost of navigation. Finally, all simulation tests are performed 1000 times.
|
In the future, we will explore extending the policy to more complex environments, incorporating passing through traffic, or expanding the work to Unmanned aerial vehicles (UAVs). We will further refine the reward function and the policy to enhance performance in edge cases, like U-shaped obstacles.
|
Generally, the reward function of the navigation problem is similar to the potential field function. Chen et al. (2017b) proposes a dense reward function that can guide reinforcement learning concisely and effectively. However, the reward function will cause the agent to learn suboptimal actions, which we will discuss in Section 3.2. Some approaches define the reward function through other rules. Chen et al. (2017a) develops a time-efficient navigation policy that respects public social norms by inducing the right-handed rules in the reward function. Xie and Dames (2023) introduces a velocity obstacle term into the reward function, enabling mobile robots to navigate autonomously in spaces filled with static obstacles and dense pedestrian traffic.
|
In this section, we introduce the architecture of the policy network, the simulation platform, and the training process with training scenarios. We utilize the hidden state of the LSTM to compensate for the lack of information in the observations and propose an attention structure to improve the policy's ability in the multi-agent scenario. Besides, we propose a multi-agent simulation platform to bridge the gap between simulation and the real world. Additionally, the multi-agent training process is based on Proximal Policy Optimization (PPO) (Schulman et al. (2017)), with modifications in trajectory collection. Finally, we use two scenarios for training: single-agent and multi-agent.
|
B
|
This study evaluates two distinct backbone models to implement the MSM: the U-net, a classic CNN-based model, and SwinIR [27], a Transformer-based model. The MSMs implemented with these two backbone models are referred to as $MSM_{Unet}$ and $MSM_{SwinIR}$, respectively.
|
This approach allows us to assess the efficacy of the MSM method across different backbone models, providing insight into how each model’s inherent strengths and weaknesses influence the MSM’s performance.
|
In Fig. 1 (right), the PSNR values of the corresponding noisy images and the differences between the noisy and denoised images are plotted. The results indicate that the discrepancy between the input image and the model's prediction is related to the quality of the input image. Notably, when the model is trained on noise-free images ($\sigma = 0$), this relationship becomes monotonic. As a result, in this case a measurement of this discrepancy, computed as the distance or similarity between the input image and the model's prediction, can serve as a metric, termed the Model Specialization Metric (MSM), to assess the quality of the input image.
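A minimal sketch of this idea follows, assuming a restoration model trained on clean images and using RMSE as one illustrative choice of discrepancy; the toy smoothing model is a stand-in for a trained backbone:

```python
import numpy as np

# Hedged sketch of the MSM idea: score an image by the discrepancy between the
# image and a restoration model's prediction of it. The RMSE distance is one
# illustrative choice of discrepancy, not necessarily the paper's exact one.

def msm_score(model, image):
    prediction = model(image)
    # Larger discrepancy -> the input deviates more from the model's
    # training distribution -> lower estimated quality.
    return np.sqrt(np.mean((image - prediction) ** 2))

# Toy usage: a crude smoother stands in for a trained denoising backbone.
toy_model = lambda x: (x + np.roll(x, 1, axis=0)) / 2
noisy = np.random.rand(64, 64)
print(msm_score(toy_model, noisy))
```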
|
In this section, we: A) introduce the concept of Model Specialization, illustrating how deviations from the training data impact model performance; B) develop the Model Specialization Metric (MSM), a deep learning-based NR-IQA metric designed to assess image quality without relying on any labels for training; C) evaluate two models to serve as the backbone for the MSM, namely U-net, a CNN-based model, and SwinIR, a Transformer-based model; D) outline the data preparation strategies, which include using images with predetermined noise levels and distortions, images from a generative model, and sodium MRI denoising; E) describe the evaluation metrics used to validate the MSM, including the Pearson Linear Correlation Coefficient (PLCC), Spearman Rank Correlation Coefficient (SRCC), and Cohen's Kappa ($\kappa$) Coefficient.
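The three validation metrics can be computed with standard libraries; the sketch below uses hypothetical quality scores, and the binning used for Cohen's kappa is an illustrative assumption:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import cohen_kappa_score

# Sketch of the three validation metrics named above, applied to hypothetical
# predicted-vs-true quality scores.
true_q = np.array([0.9, 0.7, 0.5, 0.3, 0.1])
pred_q = np.array([0.85, 0.72, 0.48, 0.35, 0.05])

plcc, _ = pearsonr(pred_q, true_q)    # linear correlation
srcc, _ = spearmanr(pred_q, true_q)   # rank (monotonic) correlation
# Kappa needs categories; here scores are binned into low/mid/high (assumption).
kappa = cohen_kappa_score(np.digitize(true_q, [0.33, 0.66]),
                          np.digitize(pred_q, [0.33, 0.66]))
print(f"PLCC={plcc:.3f}, SRCC={srcc:.3f}, kappa={kappa:.3f}")
```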
|
Table 1 lists the best SRCC/PLCC results for each backbone model with respect to different loss functions and distance measurements. The best averaged results are emphasized in bold. These results not only validate the effectiveness of the ground-truth-to-ground-truth training strategy across both U-net and SwinIR models but also show that the proposed MSM method can effectively assess image quality under the simulated noise and distortions.
|
A
|
In this study, we evaluated the performance of the state-of-the-art MR image harmonization algorithm, HACA3, across different acquired resolutions.
|
In Experiments 5 and 6, the input T2w-FLAIR image is 3D acquired and the T1w and T2w images are 2D acquired with the same orientation (axial) and different orientations, respectively.
|
Our findings characterize the impact of orientation and resolution, the effect of 3D and 2D image combinations, and the limitations of HACA3.
|
Figure 6: Experiment 3: PSNR and SSIM when the T1w image is 3D acquired and the T2w and T2w-FLAIR images are 2D acquired with the same orientation (axial).
|
In Experiments 3 and 4, the input T1w image is 3D acquired, while the T2w and T2w-FLAIR images are 2D acquired with the same orientation (axial) and different orientations, respectively.
|
B
|
TABLE II: Performance impact of expansion factor ($E_F$)
|
AxLSTM scales better with an increasing number of patches, with smaller patch sizes leading to better performance (Table III). We expect expansion factor $E_f = 3$ paired with a patch size of $(4, 8)$ to further improve performance.
|
Recurrent models offer several advantages over transformers: they scale linearly with sequence length and have lower runtime memory requirements, since storing the entire key-value (KV) cache is not necessary. The search for such alternatives has led to new approaches such as state space models (SSMs) [13, 14, 15, 16, 17], a family of sequence models that lie at the intersection of convolutional neural networks, RNNs, and classical state spaces. Several variants of SSMs have since been proposed, showing competitive performance and scalability versus transformers in several domains, including long sequence modelling [16, 17], computer vision [18], and audio [19].
|
SSAST [7] denotes the officially released SSAST model, which was pretrained on the AudioSet and LibriSpeech datasets with the masked prediction + reconstruction multitask objective. We can see that AxLSTM models consistently outperform their transformer-based SSAST counterparts by a considerable margin, while having over 45% fewer parameters, with the AxLSTM-Base configuration yielding a 30% relative improvement in aggregate performance (83.1±0.2 vs. 69.2±0.3). While the Mamba-based SSAM [19] as well as Masked Autoencoder based approaches [27] perform better, it is worth noting that the proposed AxLSTM models have about 35% and 50% fewer parameters, respectively. Overall, we can conclude that AxLSTMs perform very favourably compared to popular audio representations.
|
Patch Size: To investigate how AxLSTM models perform compared to SSAST models with a changing number of input patches, we pretrain AxLSTM-Tiny models with 3 patch sizes: $(4, 8)$, $(4, 16)$, and $(8, 16)$.
|
D
|
To leverage and preserve the original capabilities of the language model, we propose a parallel generation paradigm in which the transformer simultaneously produces audio and text tokens. We subsequently observed a minimal impact of the audio modality on text capabilities and further introduced batch-based parallel generation, which significantly enhances the model's reasoning ability during streaming audio output. As a pioneer, we opted not to sacrifice audio quality for a simpler, lower-bitrate audio encoder that would reduce the complexity of audio inference in the model. Instead, to ensure audio quality, we selected SNAC (Siuzdak, 2024), a music-grade encoder that features 8 layers of codebooks and processes hundreds of tokens per second. Innovatively, we applied text-instructed delayed parallel generation to address the issue of long SNAC codebook sequences. Experiments show that the audio output quality is on par with common TTS systems.
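The delayed parallel generation idea can be sketched as a delay pattern across codebook layers, in the spirit of the description above; the padding token, the layer count taken from the SNAC description, and the exact layout are assumptions, not Mini-Omni's actual implementation:

```python
import numpy as np

# Illustrative sketch of delayed parallel codebook generation: layer k of the
# acoustic codebook is shifted k steps so all layers can be predicted in
# parallel per step while respecting inter-layer dependencies. PAD handling
# and the 8-layer count are assumptions for illustration.
N_LAYERS, T, PAD = 8, 6, -1

def apply_delay(codes):
    # codes: (N_LAYERS, T) array of token ids; returns (N_LAYERS, T + N_LAYERS - 1)
    out = np.full((N_LAYERS, T + N_LAYERS - 1), PAD, dtype=int)
    for k in range(N_LAYERS):
        out[k, k:k + T] = codes[k]
    return out

codes = np.arange(N_LAYERS * T).reshape(N_LAYERS, T)
delayed = apply_delay(codes)
print(delayed[:3])  # each successive layer starts one step later
```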
|
We introduce "Any Model Can Talk", an innovative approach that enhances performance without altering the architecture of large models by focusing on training and inference. Our method employs a three-phase training process for speech-to-text and text-to-speech adapters, including annealing and SFT. Our method involves minimal training and modification of the original model, aiming to provide a reference for incorporating interaction capabilities into other models.
|
We also propose a method that requires minimal training and modification of the original model, enabling other works to rapidly develop their own speech capabilities. We refer to this approach as "Any Model Can Talk", designed to achieve speech output using a limited amount of additional data. The approach extends speech capabilities through additional adapters and pre-trained models, fine-tuning with a small amount of synthesized data. This is combined with the aforementioned parallel modeling approach to enable streaming output in the new modality while preserving the original model's reasoning capabilities.
|
Three-Stage Training. Our training methodology is divided into three distinct stages: (1) Modality Alignment. The goal of this stage is to enhance the text model's ability to understand and generate speech. The core model of Mini-Omni is entirely frozen, with gradients allowed only in two adapters. During this stage, we use data from speech recognition and speech synthesis to train the model's speech recognition and synthesis capabilities. (2) Adaptation Training. Once the new modality is aligned with the text model's input, the adapters are frozen. In this stage, we focus solely on training the model's text capabilities when given audio inputs, as audio output is simply synthesized from text. The model is trained using data from speech recognition, spoken question answering, and text response tasks. (3) Multi-modal Finetuning. In the final stage, the entire model is fine-tuned using comprehensive data. At this point, all model weights are unfrozen and trained. Since the primary modality alignment tasks are handled during adapter training, the original model's capabilities are maximally preserved.
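A minimal sketch of this freeze/unfreeze schedule is shown below; the module names are hypothetical placeholders for the core model and the two adapters:

```python
import torch.nn as nn

# Hedged sketch of the three-stage freezing schedule described above; module
# names (core, enc_adapter, dec_adapter) are illustrative placeholders.
class SpeechLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.core = nn.Linear(16, 16)         # stands in for the frozen LLM
        self.enc_adapter = nn.Linear(16, 16)  # speech-input adapter
        self.dec_adapter = nn.Linear(16, 16)  # speech-output adapter

def set_stage(model, stage):
    grads = {
        1: {"core": False, "enc_adapter": True,  "dec_adapter": True},   # modality alignment
        2: {"core": True,  "enc_adapter": False, "dec_adapter": False},  # adaptation training
        3: {"core": True,  "enc_adapter": True,  "dec_adapter": True},   # multi-modal finetuning
    }[stage]
    for name, module in model.named_children():
        for p in module.parameters():
            p.requires_grad = grads[name]

m = SpeechLM()
set_stage(m, 1)
print([(n, p.requires_grad) for n, p in m.named_parameters()])
```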
|
In this work, we introduce Mini-Omni, the first multi-modal model with direct speech-to-speech capabilities. Building on previous approaches that use text-guided speech generation, we propose a parallel text and audio generation method that leverages minimal additional data and modules to rapidly transfer a language model's text capabilities to the audio modality, supporting streaming output interactions with high model and data efficiency. We explore both text-instructed streaming parallel generation and batch parallel generation, which further enhance the model's reasoning ability and efficiency. Our approach successfully addresses challenging real-time dialogue tasks using a model with only 0.5 billion parameters. We have developed the Any Model Can Talk method, based on a pre- and post-adapter design, to facilitate rapid speech adaptation of other models with minimal additional training. Additionally, we have released the VoiceAssistant-400K dataset for fine-tuning speech output, designed to minimize the generation of code symbols and assist humans in a voice assistant-like manner. All our data, inference, and training codes will be progressively open-sourced at https://github.com/gpt-omni/mini-omni. We hope to provide guidance and support for other work focused on language model speech interaction.
|
B
|
In this context, with a focus on the complex-valued radial basis function (C-RBF) neural network [11], we propose an extension for deep learning and a novel parameter selection scheme. This scheme aims to initialize synaptic weights, biases, center vectors, and center variances in the complex domain. Notably, existing literature offers limited guidance on initialization techniques for multilayer RBF-based CVNNs. Despite this gap, our study compares the proposed approach against well-known methods such as random initialization [21], $K$-means clustering [22], and constellation-based initialization [23]. To the best of our knowledge, this is the first work proposing the architecture, training algorithm, and parameter selection for a multi-layered C-RBF.
|
The complex-valued Gaussian neuron is a natural extension of the well-known Gaussian neuron for the complex domain [24]. Similarly to its real-valued version, the output of the C-RBF neuron is described as
|
In this context, with a focus on the complex-valued radial basis function (C-RBF) neural network [11], we propose an extension for deep learning and a novel parameter selection scheme. This scheme aims to initialize synaptic weights, biases, center vectors, and center variances in the complex domain. Notably, existing literature offers limited guidance on initialization techniques for multilayer RBF-based CVNNs. Despite this gap, our study compares the proposed approach against well-known methods such as random initialization [21], $K$-means clustering [22], and constellation-based initialization [23]. To the best of our knowledge, this is the first work proposing the architecture, training algorithm, and parameter selection for a multi-layered C-RBF.
|
This paper presents an in-depth analysis of the initialization process in complex-valued radial basis function (C-RBF) neural networks. Our findings elucidate the intricate dependencies involved in the initialization process. Specifically, the normalization of the input and output datasets depends on the number of inputs and outputs, respectively. Furthermore, synaptic weights are influenced by the number of neurons and outputs per layer, whereas center vectors depend on the number of inputs per layer. Therefore, the proposed approach is robust to changes in the neural network architecture, such as the number of inputs, outputs, hidden layers, and neurons. This innovation is particularly impactful for deploying these networks in real-world scenarios, which require robustness across a wide range of configurations with no room for ad hoc adjustments. In a carefully designed simulation environment conforming to 3GPP TS 38 standards, our proposed deep C-RBF parameter initialization technique exhibited superior convergence performance compared to existing methods such as random initialization, $K$-means, and constellation-based initialization. Notably, for deep C-RBF architectures, our method was the only one that achieved successful convergence, highlighting its unique efficacy and adaptability. The implications of these results are manifold. First, they introduce a robust and effective initialization method that can significantly improve the training and performance of C-RBF neural networks, particularly in challenging 5G MIMO systems. Second, they lay the foundation for future research, opening avenues for the exploration of adaptive initialization techniques and offering the potential for extending our framework to other neural network architectures. In future work, we plan to validate the robustness of our proposed approach through more exhaustive experiments. We also aim to explore the applicability of our initialization framework to other neural network architectures, thereby contributing to the broader advancement of neural network-based solutions in digital communications.
|
and $\boldsymbol{\gamma}[n] \in \mathbb{C}^{P}$ is the Gaussian center, and $\sigma[n] \in \mathbb{R}$ is the variance. Note that the bias $b[n] \in \mathbb{C}$ is a linear complex-valued synaptic weight like $w[n] \in \mathbb{C}$, but considering the Gaussian output equals one. Unlike the RBF neuron, the C-RBF neuron's Gaussian center, synaptic weight, and bias are complex-valued free parameters, which are essential to map a complex-valued input $\mathbf{x}[n] \in \mathbb{C}^{P}$ into a complex-valued output $y[n] \in \mathbb{C}$. By (2), the complex-valued input is first mapped into a real-valued scalar via the Euclidean norm of the Gaussian kernel. As the variance is also a real-valued parameter, the Gaussian kernel output is consequently a real-valued scalar. Thus, the complex mapping to the output is only possible because of the synaptic weights and bias.
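Based on this description, a C-RBF neuron might be sketched as follows; the exact kernel exponent form is an assumption consistent with the text, and (2) in the excerpt remains the authoritative definition:

```python
import numpy as np

# Hedged numpy sketch of the C-RBF neuron as described above: the Gaussian
# kernel maps the complex input to a real scalar, and the complex weight and
# bias restore the mapping to the complex domain. The exponent form
# exp(-||x - gamma||^2 / sigma^2) is an assumption consistent with the text.

def crbf_neuron(x, gamma, sigma, w, b):
    kernel = np.exp(-np.linalg.norm(x - gamma) ** 2 / sigma ** 2)  # real scalar
    return w * kernel + b                                          # complex output

rng = np.random.default_rng(0)
P = 4
x = rng.normal(size=P) + 1j * rng.normal(size=P)      # complex input
gamma = rng.normal(size=P) + 1j * rng.normal(size=P)  # complex center
print(crbf_neuron(x, gamma, sigma=1.0, w=0.5 + 0.5j, b=0.1 - 0.2j))
```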
|
A
|
In [16], the interference mitigation problem using A-RIS in two spectrum sharing scenarios is investigated: under spectral coexistence of radar and communication systems, and spectrum sharing in device-to-device (D2D) communications. The results show that A-RIS significantly outperforms non-absorptive RIS in interference suppression scenarios. Similarly, the interference cancellation ability of A-RIS was exploited to support joint D2D and cellular communications in [17].
|
In [21], the maximum sum rate achieved by optimizing the RIS phase shifts subject to the user power constraints was studied for NOMA. The results show that the proposed approaches improve spectral efficiency through the use of different power levels. In [22], NOMA was implemented in a multi-cell scenario assisted by multiple RIS to minimize the transmit power in the uplink. The results demonstrate that inter-group interference cancellation in NOMA, with the help of multi-reflection RIS, achieves lower total transmit power.
|
In uplink NOMA, which is the focus of this paper, the BS can similarly employ SIC to remove stronger user signals before decoding a given signal of interest. However, as illustrated in Fig. 1, the SIC condition needs to include both the individual users’ transmit powers, as well as the channel gain of the individual users’ channels sharing the same NOMA resource. Thus, to effectively implement SIC, the transmit power levels of the uplink users are important. There is still substantially less work on uplink NOMA compared to the downlink case, and there is also a lack of work addressing the reliability of NOMA in the presence of active jamming. For either downlink or uplink NOMA, when the users and the BS only have one antenna, dealing with jammers is a challenge since neither the users (for the downlink) nor the BS (for the uplink) may have sufficient spatial degrees of freedom (DoFs) to cancel the jamming [4]. In such cases, the jamming can severely degrade the NOMA performance even when decoding is performed by or for the strongest user.
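To illustrate the uplink SIC condition described here, the toy sketch below decodes two users in descending order of received power $p_i |h_i|^2$, assuming perfect cancellation and illustrative channel and noise values:

```python
import numpy as np

# Illustrative uplink NOMA SIC sketch: the BS decodes users in descending
# order of received power p_i * |h_i|^2, subtracting each decoded signal.
# Two users on one resource, toy values, and perfect SIC are assumptions.
p = np.array([1.0, 0.2])                 # transmit powers
h = np.array([0.6 + 0.3j, 1.1 - 0.2j])   # channel gains
noise = 0.01

rx_power = p * np.abs(h) ** 2
order = np.argsort(rx_power)[::-1]       # strongest received user decoded first
interference = rx_power.sum()
for user in order:
    interference -= rx_power[user]       # weaker, not-yet-decoded users interfere
    sinr = rx_power[user] / (interference + noise)
    print(f"user {user}: SINR = {10 * np.log10(sinr):.1f} dB")
```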
|
RIS technology has been applied in many different types of wireless communication scenarios, including NOMA. However, while the use of conventional phase-shift-only RIS has been proposed for NOMA applications with the goal of improving spectral efficiency, to the best of our knowledge there is no prior work on using A-RIS with NOMA, nor on using an RIS to mitigate the impact of external interference (e.g., jamming) on NOMA performance. NOMA is vulnerable due to its use of shared resources, since decoding performance depends on sufficient differences in the users' received power levels, which in the uplink requires adapting the transmit power levels to the channel conditions. Jamming can therefore reduce the efficiency of NOMA through severe loss of signal-to-noise ratio (SNR) [18].
|
Relatively little work has been done on optimizing NOMA performance in the presence of a jammer. In [27], optimal user grouping for NOMA was proposed to overcome the impact of jamming and improve the sum rate. In [28], a mobile access point or a UAV was exploited together with joint power control to mitigate the effect of a jamming attack and increase the reliability of the communication. Furthermore, anti-jamming precoding was proposed in [29] to minimize the total transmit power in an uplink MIMO-NOMA system. Finally, in [30], transmit beamforming together with the use of an RIS and artificial noise was proposed to enhance the secrecy of a NOMA system.
|
C
|
An inherently asymptotically stable system can be identified as an unstable Koopman system if the lifting functions are chosen poorly. Additionally, in some cases, noise in the data can result in the identification of an unstable Koopman system even when the underlying system is asymptotically stable [Mamakoukas2020]. For the dynamic model to be as representative as possible, system identification methods should enforce asymptotic stability when required. In [Dahdah2021, Dahdah2022], the authors enforce asymptotic stability on the approximate Koopman system by formulating the Koopman approximation problem as a series of linear matrix inequality (LMI) and bilinear matrix inequality (BMI) constraints. To formulate the Koopman operator approximation problem as a convex optimization problem, the BMI constraints used to enforce asymptotic stability are transformed into a set of LMI constraints in [Lortie2024], using the method proposed in [Mabrok2023, Lortie2024, Hara2020, Hara2021]. In this paper, the Koopman operator approximation problem is formulated as a series of LMIs to enforce asymptotic stability, leveraging the approach taken in [Dahdah2022, Lortie2024].
|
This paper presents a new approximate Koopman modeling method, based on TDMD, that 1) reduces the bias in the dynamics and input matrices and 2) enforces asymptotic stability on the approximate Koopman system. The goal of this method is to identify an asymptotically stable Koopman representation with reduced bias when using noisy data regardless of the choice of lifting functions.
|
Obtaining a Koopman representation of a real system with data-driven methods in the presence of noise is a difficult task, since a biased model can result. Moreover, regardless of the lifting functions used, it is important that the approximate Koopman model be asymptotically stable when the underlying dynamical system is asymptotically stable. The method proposed in this paper, TDMD with inputs and an asymptotic stability constraint, identifies an asymptotically stable approximate Koopman system with reduced bias when noisy data is used in the identification process, regardless of the lifting functions chosen. Using a simulated dataset of a Duffing oscillator and an experimental dataset of a soft robot arm, the proposed method is shown to compute a Koopman matrix that is closer to the noiseless solution and predicts a trajectory with a lower error than the state-of-the-art methods above a certain threshold of noise.
|
TDMD [Hemati2017], inspired by total least-squares DMD [Golub1980, Markovsky2007], projects the snapshot matrices onto an augmented matrix to reduce the bias in the dynamics matrix. This method differs from classic least squares in that it minimizes the orthogonal distance between the linear fit and the data points, whereas least squares minimizes the vertical distance [Hemati2017]. This paper proposes a method to extend the application of TDMD to include both the dynamics and the input matrices associated with the approximate Koopman model. Including inputs in the TDMD framework allows for the consideration of a wider range of applications, such as regulated systems and engineering applications requiring inputs. Additionally, to ensure that the proposed method identifies an asymptotically stable Koopman system with reduced bias, this paper introduces a new formulation of the TDMD problem using LMI constraints to enforce asymptotic stability. In summary, the proposed method is a two-part method, which first projects the snapshot matrices with inputs onto an augmented matrix, and then computes the Koopman matrix with reduced bias by solving a convex optimization problem that imposes asymptotic stability on the identified system. The performance of the proposed method is compared to the state-of-the-art methods, forward-backward EDMD (fbEDMD) [Lortie2024] and EDMD [Williams2015], using a simulated dataset of a Duffing oscillator and an experimental dataset of a soft robot arm.
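A rough numpy sketch of the projection-then-solve structure described here is given below; the synthetic system, the rank choice, and the omission of the asymptotic stability constraint are simplifying assumptions:

```python
import numpy as np

# Hedged sketch of a total-DMD-style debiasing step with inputs: stack the
# input-augmented snapshot matrices, project onto the dominant right singular
# subspace of the stacked matrix, then solve least squares for [A B].
rng = np.random.default_rng(1)
n, m, T, r = 3, 1, 200, 4           # states, inputs, snapshots, projection rank

X0 = rng.normal(size=(n, T))        # lifted states at time k
U = rng.normal(size=(m, T))         # inputs at time k
A_true = 0.9 * np.eye(n)
B_true = rng.normal(size=(n, m))
X1 = A_true @ X0 + B_true @ U + 0.01 * rng.normal(size=(n, T))

Z = np.vstack([X0, U, X1])          # augmented snapshot matrix
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:r].T @ Vt[:r]               # projector onto dominant right singular subspace
X0p, Up, X1p = X0 @ P, U @ P, X1 @ P

AB = X1p @ np.linalg.pinv(np.vstack([X0p, Up]))  # [A B] estimate, reduced bias
print(np.round(AB[:, :n], 3))       # estimate of the dynamics matrix A
```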
|
Identifying a real system from data involves measurements, and measurements are always corrupted by some amount of noise. When a dynamic model is fit using noisy data, the resulting data-driven model can be biased [Dawson2016, Hemati2017]. In the Koopman operator approximation, this bias appears in the dynamics and input matrices. The bias can shift the eigenvalues of the dynamics matrix towards the origin of the complex plane, which leads to higher-than-expected decay rates [Dawson2016]. Many papers in the literature provide methods to reduce the bias in the dynamics matrix, such as forward-backward DMD (fbDMD) [Dawson2016] and total DMD (TDMD) [Golub1980, Hemati2017]. Recent work [Lortie2024] has adapted fbDMD to also account for the bias in the input matrix, but there is no equivalent adaptation for the input matrix using TDMD. Extending TDMD to reduce the bias in the input matrix has the potential to outperform [Lortie2024], since the authors in [Dawson2016] show that TDMD reduces the bias in the dynamics matrix more than fbDMD does. Additionally, when identifying an inherently asymptotically stable system, the resulting model can be unstable if the lifting functions are chosen poorly. If the underlying dynamics are asymptotically stable, then the data-driven model must be asymptotically stable as well; otherwise, the data-driven model is not representative of the true system and is not useful for tasks such as prediction. Some work in the literature formulates constraints as linear matrix inequalities (LMIs) in the Koopman operator approximation problem to enforce asymptotic stability in the Koopman least-squares problem [Dahdah2022] and in the Koopman fbEDMD problem [Lortie2024].
|
A
|
The time-domain results are obtained by taking the inverse fast Fourier transform of the coherent subtraction between the cases with and without the lymphedema phantom present, $\Delta s_{21}^{p} = \text{IFFT}(S_{21}^{p} - S_{21}^{o})$, as shown in Fig. 4.
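As a minimal sketch of this step, assuming a synthetic 1001-point frequency sweep in place of real VNA data:

```python
import numpy as np

# Minimal sketch of the coherent-subtraction step above: subtract the
# phantom-absent S21 sweep from the phantom-present sweep in the frequency
# domain, then IFFT to get the time-domain response. The sweep settings and
# the 2.5 ns synthetic echo are illustrative assumptions.
freqs = np.linspace(1e9, 9e9, 1001)
s21_o = np.exp(-2j * np.pi * freqs * 1e-9)                    # background only
s21_p = s21_o + 0.05 * np.exp(-2j * np.pi * freqs * 2.5e-9)   # plus phantom echo

delta = np.fft.ifft(s21_p - s21_o)                     # time-domain difference
t = np.fft.fftfreq(len(freqs), d=freqs[1] - freqs[0])  # time axis of the IFFT
print(f"peak response at t = {abs(t[np.argmax(np.abs(delta))]) * 1e9:.2f} ns")
```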
|
Through the maximum use of automation, the measurement process requires no manual intervention for individual scan angles.
|
The collected data is returned to the main process upon completion of individual sub-processes at the end of the collection time set by the user.
|
MPADA has been developed to enable the maximum use of automation in S-parameter measurements and promote reliable and repeatable data collection.
|
The measurement would then be executed automatically following the set schedule without manual intervention.
|
A
|
It is presumed that the RIS is aware of the cascade link's channel state information (CSI), enabling it to optimize the phase-shifting coefficients of its elements to maximize the SNR at LT.
|
As there may be obstructions impeding the direct link in BTTNs, we also assume that all links follow a Rayleigh fading distribution. This received signal is then processed by LT's demodulator circuit to extract the information sent by TT.
|
We also assume that TT and LT are equipped with a single antenna for simplicity. Therefore, the received signal at TT can be given as follows.
|
Finally, given that the received SNR at LT is the summation of two independent RVs as defined in (5), the mean and variance of $\gamma_{\mathrm{L}}$ can be obtained as:
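The additivity of mean and variance for independent RVs can be checked numerically; the exponential marginals below are illustrative stand-ins for the actual distributions in (5):

```python
import numpy as np

# Quick numerical check of the stated property: for the sum of two independent
# RVs, the means add and the variances add. The exponential marginals are
# illustrative assumptions, not the exact distributions of the paper.
rng = np.random.default_rng(7)
g1 = rng.exponential(scale=2.0, size=1_000_000)
g2 = rng.exponential(scale=0.5, size=1_000_000)
gL = g1 + g2

print(np.mean(gL), 2.0 + 0.5)         # means add
print(np.var(gL), 2.0**2 + 0.5**2)    # variances add (by independence)
```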
|
Fig. 1 illustrates the system model for a RIS-aided BTTN. In this setup, TT and LT function as semi-passive backscatter devices, powered by continuous wave carrier signals emitted by RF sources in an indoor setting. TT’s objective is to transmit its messages to LT, facilitated by an indoor RIS equipped with N reflective elements. In standard BTTN operations, TT alters its impedance, affecting the tag antenna’s reflection coefficient and thus encoding the information within the backscattered signal, which is subsequently captured by LT [11].
|
B
|
Note that, when numerical citations are used, the references were sorted into the same order they appear in the bibliography.
|
This sample file demonstrates a simple use of BibTeX via a \bibliography command referencing the aapmsamp.bib file.
|
width in twocolumn mode. It is supposed to be set on the full width of the page, just as the caption does.
|
bibliography (footnote: automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography).
|
Note that, when numerical citations are used, the references were sorted into the same order they appear in the bibliography.
|
A
|
Timely identification of breast cancer is crucial, as it significantly improves the likelihood of effective therapy and prolonged survival. Research indicates that over 95 percent of women diagnosed with early-stage breast cancer survive five years or more [9, 10]. This statistic emphasizes the life-saving capacity of early diagnosis and the importance of developing dependable and effective diagnostic techniques.
|
This automated methodology seeks to enhance the accuracy and effectiveness of breast cancer identification, potentially resulting in earlier detection and improved patient prognosis [6].
|
This project seeks to utilize advanced deep-learning models to create a CAD system, in response to the rising occurrence of breast cancer and the crucial significance of early detection. This research aims to determine the most effective methods for categorizing histopathology images of breast cancer by conducting a comparative examination of eight sophisticated models. The primary objective is to enhance the early identification and intervention process, thereby reducing death rates and improving patient outcomes.
|
The conventional approaches for diagnosing breast cancer, which depend mainly on manual examination by pathologists, are not only time-consuming but also susceptible to human fallibility. Pathologists of different proficiency levels may produce inconsistent outcomes, potentially resulting in misdiagnoses. Therefore, there is an urgent need for automated, precise, and effective diagnostic methods to aid in the timely identification of breast cancer.
|
Nevertheless, traditional manual diagnosis is time-consuming and necessitates the proficiency of exceptionally trained pathologists, who may still be susceptible to diagnostic inaccuracies as a result of human constraints and variations in expertise.
|
C
|
We demonstrate the effect of speeding up the single rolling shutter's sampling rate from 1000 Hz to 4000 Hz. Reconstruction is stable across all offsets of the rolling shutter schedule.
|
We have demonstrated the efficacy of our blocked differences algorithm (Algorithm 2) in reconstructing PSTEs from the rolling shutter readout of a camera, accurately recovering signals orders of magnitude faster than the native global shutter rate and up to the Nyquist limit of the rolling shutter sampling rate. Compared to alternative TV and $\ell^1$ norm algorithms, our algorithm is both faster and offers superior reconstruction quality (Figure 5). Our theoretical results characterize how certain parameters of our imaging system, namely the rolling shutter rate, the time integration window, and the power of exogenous noise, affect our algorithm's reconstruction error (Section 4). These theoretical results were validated in simulation (Figure 6), and they inform how a physical rolling shutter system should be tuned to accommodate a signal of interest.
|
Using the PSTE of Figure 4, we compared the reconstruction quality of our blocked differences algorithm (B = 50) against two alternative compressed sensing algorithms: one using a TV regularizer and one using a standard $\ell^1$ regularizer (see Appendix C for details). All three algorithms were optimized using FISTA, and they were allowed at most 10000 optimization steps with the same convergence threshold. The stepsizes were calibrated using similar calculations, and the regularization parameter was set to the same value $\lambda = 0.1$ with slight adjustments for the TV algorithm (see Appendices A.2 and C). We compared reconstruction quality by looking at the center pixel where the PSTE is spatially localized, shown in Figure 5. Notably, the reconstructions of the two alternative algorithms suffer from a periodic dropout artifact, which is not noticeably present in the reconstruction of our blocked differences algorithm. This is an artifact of the rolling shutter sampler, and we discuss this in more detail in Section 6. In terms of speed, our differences algorithm (31.1 s) was significantly faster than both the $\ell^1$ algorithm (64.4 s) and the TV algorithm (190.8 s). In summary, our algorithm was both faster and gave significantly higher quality reconstructions than the two alternative algorithms in our rolling shutter system.
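For reference, a generic FISTA loop for the $\ell^1$-regularized problem looks like the sketch below; the random Gaussian measurement matrix stands in for the rolling shutter operator and is purely illustrative:

```python
import numpy as np

# Minimal FISTA sketch for the l1-regularized reconstruction referenced above:
# minimize 0.5*||A x - y||^2 + lam*||x||_1. The Gaussian A is an assumption.
rng = np.random.default_rng(0)
m, n = 80, 200
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.1
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = z = np.zeros(n)
t = 1.0
for _ in range(500):
    x_new = soft(z - (A.T @ (A @ z - y)) / L, lam / L)  # proximal gradient step
    t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
    z = x_new + ((t - 1) / t_new) * (x_new - x)          # Nesterov momentum
    x, t = x_new, t_new

print(f"recovered {np.sum(np.abs(x) > 0.05)} coefficients above 0.05 (true support: 8)")
```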
|
The above result allows us to characterize how different properties of our imaging system and signal affect the reconstruction error, and we validate these predictions in Section 5.2. Increasing the number of lines sampled while fixing the sampling frequency will increase the maximum support size $k$ by the discussion in Section 4.1. This decreases the near-sparsity term $\sigma_k(x^{*(0)}) + \sigma_k(\nabla_t x^*)$ and hence gives a smaller reconstruction error. From our discussion in Section 4.1 and the results of Section 5.2, this amounts to choosing an appropriate integration window of the rolling shutter readout so the number of lines per sample is tuned to the size of the signal's spatial support. Similarly, increasing the sampling frequency, or equivalently slowing the signal, will shrink the overall magnitude of the time gradient and also give a smaller reconstruction error. Finally, the result predicts that the average reconstruction error should scale linearly with the root power of exogenous noise.
|
We ran our blocked differences algorithm (B = 50) on this signal and examined reconstruction error as a function of pulse frequency. For each pulse, we computed the average frame-wise error over its temporal support and normalized by its power, as the pulses have varying power. The results of this simulation are shown in Figure 8. Even at frequencies up to the Nyquist limit of 500 Hz, our differences algorithm can generate high quality reconstructions from our rolling shutter readout. However, there is also a significant periodic trend in the error. This is an artifact due to spatial coverage and the rolling shutter, and we discuss this in more detail, along with potential fixes, in Section 6. While this artifact presents a significant challenge, these preliminary simulation results are encouraging and show that our differences algorithm is capable of reconstruction up to the Nyquist limit in a rolling shutter system.
|
A
|
The results suggest that integral control is necessary for tracking moving targets, given the positive steady-state error observed in the azimuth with the lead controller in Figure 5(d). The gun turret model with lead control corroborates this outcome, as the transfer functions in (10) and (11) have one free integrator (one factor of $1/s$). For this type of plant, it can be shown using the Final Value Theorem [13] that the steady-state errors of the feedback system with a lead controller are non-zero for ramp inputs. Using a PI+lead controller adds an integrator to the closed-loop system, which results in a steady-state error of 0 mil.
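The Final Value Theorem argument can be checked symbolically; the sketch below uses illustrative transfer functions with one and two free integrators, not the paper's actual gun turret loop:

```python
import sympy as sp

# Hedged check of the Final Value Theorem argument above: a type-1 open loop
# (one free integrator) has finite nonzero steady-state error for a ramp,
# while adding a PI term (a second integrator) drives it to zero. The numeric
# gains K and a are illustrative placeholders.
s = sp.symbols('s', positive=True)
K, a = 10, 2

for name, L_s in [("lead-like", K / (s * (s + a))),
                  ("PI+lead-like", K * (s + 1) / (s**2 * (s + a)))]:
    E = (1 / (1 + L_s)) * (1 / s**2)   # error transform for a unit ramp input
    ess = sp.limit(s * E, s, 0)        # Final Value Theorem: e_ss = lim s*E(s)
    print(name, "steady-state ramp error =", ess)
```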
|
In this paper, we analyze the aiming error distributions of a controlled gun turret system given target location as input through numerical simulations. We develop a mathematical model of the gun turret based on Newton’s laws. Based on the developed model for the physical system, we design two controllers, PID and MPC, to simulate controlled aiming at static targets. In numerical experiments, we study the aiming error distribution under several scenarios of controlled gun turret movement. First, we conduct a sensitivity analysis to analyze the impact of estimation errors in model parameters on the error distribution. We then perform an experiment that examines the effects of uncertainty in the aimpoint measurement on gun turret accuracy. Next, we analyze the dependency of aiming accuracy on when one chooses to fire at a target using the error data from the sensitivity analysis and the uncertainty experiment. Continuing with uncertainty analysis, we perform another experiment wherein we model the process of measuring the aimpoint and quantify the uncertainty added to the aiming error. In this experiment, we calculate the error distribution statistics analytically and compare the result to numerical estimates of the statistics from simulation data. In addition to a stationary target, we consider moving targets. We design two variants of PID controllers for aiming at moving targets and analyze the performance of the control system.
|
In this paper, we analyze aiming errors from a controlled gun turret system given an input target location. A linearized mathematical model of the gun turret is developed and used in controlled turret movement simulations against static and moving targets. We design two different controllers, a PID controller and an MPC controller, to assist in turret movement. The impacts of both errors in estimating the systems’ parameters and measurement noise on the aiming accuracy are statistically analyzed. The effects of measurement noise on the aiming errors are modeled and simulation statistics are compared with theoretical results. Preliminary results for tracking moving targets under PID control are presented.
|
Due to the limitations of human aiming ability, various control approaches have been developed to improve firing accuracy by using a feedback controller to assist in turret movement. For purposes of research, we assume a target location is provided and focus on the error in moving the turret from an initial location to the location of least error. The more common methods for this objective are based on Proportional-Integral-Derivative (PID) control [4, 12, 15, 16, 20]. Other recent approaches are based on Adaptive Robust Control (ARC) [17, 28, 29], Sliding Mode Control (SMC) [21, 27], Model Predictive Control (MPC) [14], and methods that synthesize controllers using AI [1, 5]. The advantages of using feedback control for aiming over manual aiming include, but are not limited to, a reduced sensitivity of the turret system to extraneous vibrations of the gun barrel and to the effects of the stress response naturally inherent in humans. While considerable progress has been made in control research for turret systems, the work done has centered only on demonstrating the viability of a particular control approach rather than statistically analyzing the accuracy of the weapon system under feedback control.
|
The state-space representation of the gun turret in equations (12) and (13) is used as the plant model in the controller design. At the start of controller design, the manipulated variables (MVs), measured outputs (MOs), measured disturbances (MDs), and unmeasured disturbances (UDs) are defined in the tool. The MVs are the control inputs $u_1$ and $u_2$, and the MOs are the gun turret outputs $\theta$ and $\alpha$. Since we ignore disturbances in this study, we do not define the MDs and UDs in the design. Additionally, we leave the MVs and MV increments unconstrained, which we recognize may not be practical; however, we are not comparing controller performance to determine the 'best' control approach for turret movement in this study. We are examining fundamental properties of controller errors, which are characteristics of the controlled turret system and do not depend on the input to the system.
|
B
|