$$\Delta x = -\frac{f(x)}{f^{\prime}(x)} \bigg/ \left[1 + \frac{1}{2h_{2}(x)}\,\frac{f(x)}{f^{\prime}(x)}\left(h_{0}(x)\frac{f(x)}{f^{\prime}(x)} + h_{1}(x)\right)\right].$$
from $f/f^{\prime}$ [17, 33, 39], which means the update
(i) fast calculation of $f^{\prime\prime}/f^{\prime}$ from $f/f^{\prime}$,
Structure relations [24] relate the ratio $f/f^{\prime}$
Installation of $f/f^{\prime}$ in (1) progresses by dividing $R_{n}^{m} \cong x^{m}F$
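The corrected update above can be sketched numerically. A minimal sketch, assuming generic auxiliary functions `h0`, `h1`, `h2` (their concrete definitions come from the structure relations and are not reproduced here); with $h_0 = h_1 = 0$ and $h_2 = 1$ the correction vanishes and the step reduces to plain Newton iteration:

```python
# Sketch of the corrected Newton-type step
#   dx = -(f/f') / [1 + (1/(2*h2)) * (f/f') * (h0*(f/f') + h1)]
# h0, h1, h2 are placeholders for the structure-relation functions.

def update_step(f, fp, h0, h1, h2, x):
    """One step dx of the corrected iteration at point x."""
    r = f(x) / fp(x)  # the basic Newton ratio f/f'
    return -r / (1.0 + r * (h0(x) * r + h1(x)) / (2.0 * h2(x)))

# With h0 = h1 = 0 and h2 = 1 we recover plain Newton iteration,
# e.g. for f(x) = x^2 - 2 the iterates approach sqrt(2):
f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x
zero = lambda x: 0.0
one = lambda x: 1.0

x = 1.0
for _ in range(6):
    x += update_step(f, fp, zero, zero, one, x)
print(round(x, 6))  # 1.414214
```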
C
In practice, the MSLP should be constructed in such a way that the ‘input’ of each of the subroutines (Algorithms 4–7) is stored in memory when the subroutine is called and the ‘output’ is kept in memory for the subsequent stage of Algorithm 3.
There exists a $b$-MSLP, $S$, of length at most $\lambda$ such that if $S$ is evaluated with memory containing the input of Algorithm 4 then $S$ returns memory containing the output of Algorithm 4.
There exists a $b$-MSLP, $S$, of length at most $\lambda$ such that if $S$ is evaluated with memory containing the input of Algorithm 5 then $S$ returns memory containing the output of Algorithm 5.
The cost of the subroutines is determined with this in mind; that is, for each subroutine we determine the maximum length and memory requirement for an MSLP that returns the required output when evaluated with an initial memory containing the appropriate input.
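The evaluation model above can be illustrated with a toy straight-line program over a fixed number of memory slots. This is a simplified sketch: the instruction format `(dest, i, j)` meaning "memory[dest] = memory[i] * memory[j]" is an assumption for illustration, and real MSLPs also allow inverses and track length and memory requirements separately:

```python
# Toy evaluation of a straight-line program with b memory slots.
# Each instruction (dest, i, j) sets memory[dest] = memory[i] * memory[j].

def evaluate_mslp(program, memory):
    """Evaluate `program` in place on `memory`; return the final memory."""
    for dest, i, j in program:
        memory[dest] = memory[i] * memory[j]
    return memory

# Example over the integers: compute g^8 from g by three squarings,
# never using more than b = 2 memory slots.
g = 3
memory = [g, 1]
program = [(0, 0, 0),  # slot 0 = g^2
           (0, 0, 0),  # slot 0 = g^4
           (0, 0, 0)]  # slot 0 = g^8
result = evaluate_mslp(program, memory)[0]
print(result)  # 6561
```

The point of the memory bookkeeping is exactly the one made above: the input must already sit in memory when the subroutine starts, and the output must still sit in memory when it ends.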
D
It is hard to approximate such a problem in its full generality using numerical methods, in particular because of the low regularity of the solution and its multiscale behavior. Most convergence proofs either assume extra regularity or special properties of the coefficients [AHPV, MR3050916, MR2306414, MR1286212, babuos85, MR1979846, MR2058933, HMV, MR1642758, MR3584539, MR2030161, MR2383203, vs1, vs2, MR2740478]. Some methods work even when the solution has low regularity [MR2801210, MR2753343, MR3225627, MR3177856, MR2861254].
Of course, the numerical scheme and the estimates developed in Section 3.1 still hold. However, several simplifications are possible when the coefficients have low contrast, leading to sharper estimates. We remark that in this case our method is similar to that of [MR3591945], with some differences. First, we allow $\tilde{T}$ to be nonzero. Also, our scheme is defined by a sequence of elliptic problems, avoiding the annoyance of saddle-point systems. We had to reconsider the proofs, in our view simplifying some of them.
As in many multiscale methods previously considered, our starting point is the decomposition of the solution space into fine and coarse spaces that are adapted to the problem of interest. The exact definition of some basis functions requires solving global problems but, based on decay properties, only local computations are required, although these are not restricted to a single element. It is interesting to notice that, although the formulation is based on hybridization, the final numerical solution is defined by a sequence of elliptic problems.
mixed finite elements. We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions whose support depends only weakly on the log of the contrast. Here, we propose eigenvalue problems based on edges of macro elements, removing the dependence
The remainder of this paper is organized as follows. Section 2 describes a suitable primal hybrid formulation for problem (1), which is followed in Section 3 by its discrete formulation. A discrete space decomposition is introduced to transform the discrete saddle-point problem into a sequence of elliptic discrete problems. The analysis of the exponential decay of the multiscale basis functions is considered in Section 3.2. To overcome the possible deterioration of the exponential decay for high-contrast coefficients, in Section 3.1 the Localized Spectral Decomposition (LSD) method is designed and fully analyzed. To allow an efficient pre-processing numerical scheme, Section LABEL:ss:findim discusses how to reduce the right-hand-side space dimension without losing a target accuracy, and also develops $L^{2}(\Omega)$ a priori error estimates. Section LABEL:s:Algorithms gives a global overview of the proposed LSD algorithm. Appendix LABEL:s:Auxiliaryresults provides some mathematical tools and Appendix LABEL:s:Notations refers to a notation library for the paper.
B
In particular, two of them (called legs) have their midpoints touched by $P$, whereas the remaining one is called the base.
Moreover, one of the following holds: (1) The base is flush with (i.e., contains an edge of) $P$.
(2) One of the legs is flush with an edge of $P$ and has as its midpoint a vertex of this edge.
The difference is mainly due to the degenerate case (where a chord of $P$ is parallel to an edge of $P$) and to floating-point issues in both programs.
A
Table 5: Importance ranking of CreditScore, CrowdWisdom and PolarityScores over time; 0 indicates the best rank.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We counter this by debunking at the single-tweet level and letting each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 5(a). It can be seen that although the credibility of some tweets is low (rumor-related), averaging still makes the CreditScore of the Munich shooting higher than the average of news events (hence, close to news). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the Munich shooting event in Figure 5(b). We can see that the curve of the Munich shooting event is also close to the curve of average news, indicating that the event is more news-related.
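The per-tweet voting idea can be sketched in a few lines. This is an illustrative sketch only: the function name and the scores below are hypothetical, not the paper's actual CreditScore implementation or data.

```python
# Sketch of tweet-level voting: each tweet carries its own credibility
# score, and the event-level score is their average, so a burst of
# low-scoring (rumor-related) tweets shifts the event score only
# gradually rather than flipping it.

def event_credit_score(tweet_scores):
    """Average per-tweet credibility; higher means more news-like."""
    return sum(tweet_scores) / len(tweet_scores)

# Mostly credible tweets with a few rumor-related ones mixed in:
scores = [0.9, 0.85, 0.8, 0.2, 0.15, 0.9, 0.88]
print(round(event_credit_score(scores), 3))  # 0.669
```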
In this work, we propose an effective cascaded rumor detection approach using deep neural networks at the tweet level in the first stage and the wisdom of the “machines”, together with a variety of other features, in the second stage, in order to enhance rumor detection performance in the early phase of an event. The proposed approach outperforms state of the
We showcase here a study of the Munich shooting. We first show the event timeline at an early stage. Next, we discuss some examples of misclassifications by our “weak” classifier and show some analysis of the strength of some highlighted features. The rough event timeline looks as follows.
the idea of focusing on early rumor signals in text contents, which are the most reliable source before rumors spread widely. Specifically, we learn a more complex representation of single tweets using Convolutional Neural Networks, which can capture more hidden meaningful signals than enquiries alone to debunk rumors. [7, 19] also use RNNs for rumor debunking. However, in their work, the RNN is used at the event level. The classification leverages only the deep data representations of the aggregated tweet contents of the whole event, while ignoring other features that become effective at a later stage, such as user-based features and propagation features. Although tweet contents are the only reliable source of clues at an early stage, they are also likely to carry doubtful perspectives and different stances at this specific moment. In addition, they could relate to rumorous sub-events (see, e.g., the Munich shooting). Aggregating all relevant tweets of the event at this point can be noisy and harm the classification performance. One could think of a sub-event detection mechanism as a solution; however, detecting sub-events in real time over the Twitter stream is a challenging task [22], which increases latency and complexity. In this work, we address this issue by deep neural modeling only at the single-tweet level. Our intuition is to leverage the “wisdom of the crowd” theory: even if a certain portion of tweets at a given moment (mostly in the early stage) is weakly predicted (because of these noisy factors), the ensemble of them contributes to a stronger prediction.
C
We define $\psi\left(t\right) = z\left(t\right) + h\left(t\right)$, and

$$\phi^{2}\left(t+1\right) \leq z\left(t\right) + h\left(t\right)\phi\left(t\right) + \phi^{2}\left(t\right) \leq z\left(t\right) + h\left(t\right)\max\left[1, \phi^{2}\left(t\right)\right] + \phi^{2}\left(t\right).$$
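The step replacing $h(t)\,\phi(t)$ by $h(t)\max\left[1,\phi^{2}(t)\right]$ rests on an elementary bound, stated here assuming $\phi(t) \geq 0$:

```latex
\phi(t) \leq \max\left[1, \phi^{2}(t)\right],
\quad\text{since}\quad
\begin{cases}
\phi(t) \leq 1 & \Rightarrow\ \phi(t) \leq 1,\\[2pt]
\phi(t) > 1 & \Rightarrow\ \phi(t) < \phi^{2}(t).
\end{cases}
```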
C
At 17:52 CEST, a shooter opened fire in the vicinity of the Olympia shopping mall in Munich. 10 people, including the shooter, were killed and 36 others were injured.
At 18:22 CEST, the first tweet was posted. There might be some delay, as we retrieve only tweets in English and the very first tweets were probably in German. The tweet is: ”Sadly, i think there’s something terrible happening in #Munich #Munchen. Another Active Shooter in a mall. #SMH”.
At 18:31 CEST, the first misclassified tweet was posted. It was a tweet with shock sentiment and swear words: ”there’s now a shooter in a Munich shopping centre.. What the f*** is going on in the world. Gone mad”. It is classified as rumor-related.
We observe that at certain points in time, the volume of rumor-related tweets (for sub-events) in the event stream surges. This can lead to false positives for techniques that model events as the aggregation of all tweet contents, which is undesirable at critical moments. We counter this by debunking at the single-tweet level and letting each tweet vote for the credibility of its event. We show the CreditScore measured over time in Figure 13(a). It can be seen that although the credibility of some tweets is low (rumor-related), averaging still makes the CreditScore of the Munich shooting higher than the average of news events (hence, close to news). In addition, we show the feature analysis for ContainNews (percentage of URLs containing news websites) for the Munich shooting event in Figure 13(b). We can see that the curve of the Munich shooting event is also close to the curve of average news, indicating that the event is more news-related.
B
March 31st, 2017, on a browser with a clean history.
frequency of the pre-event aspect stays high. We witness a similar phenomenon with the same event in 2017 in the Google query logs. We therefore postulate that (1) long-term salience should provide good ranking results for the
RQ3. We demonstrate the results of single models and our ensemble model in Table 4. As also witnessed in RQ2, $SVM_{all}$, with all features, gives a rather stable performance for both NDCG and Recall, improving over the baseline, yet not significantly. Our ensemble model, which is learned to trade off between salience and timeliness, achieves the best results for all metrics and outperforms the baseline significantly. As the testing entity queries in this experiment span all event times and all event types, these improvements illustrate the robustness of our model. Overall, we witness the low performance of the adapted QAC methods. One reason is, as mentioned, that QACs, even time-aware ones, generally favor already salient queries, following the rich-get-richer phenomenon, and are not ideal for entity queries that are event-related (where aspect relevance can change abruptly). Time-aware QACs for partially long prefixes like entities often encounter sparse traffic of query volumes, which also contributes to the low results.
While the precise methods employed by the search engine for its recommendations remain undisclosed, the subpar performance could potentially be attributed to the influence of aspect salience (in this case, query popularity) and the occurrence of the rich-get-richer phenomenon: the salience of an aspect is
We further investigate the identification of event time, which is learned on top of the event-type classification. For the gold labels, we gather from the studied times with regard to the previously mentioned event times. We compare the result of the cascaded model with non-cascaded logistic regression. The results are shown in Table 3 (bottom), showing that our cascaded model, with features inherited from the performance of the SVM in the previous task, substantially improves over the single model. However, the overall modest results show the difficulty of this multi-class classification task.
C
Variational methods, stochastic mini-batches, and Monte Carlo techniques have been studied for uncertainty estimation of reward posteriors of these models [Blundell et al., 2015; Kingma et al., 2015; Osband et al., 2016; Li et al., 2016].
from Monte Carlo tree search [Bai et al., 2013] and hyperparameter tuning for complex optimization in science, engineering and machine learning problems [Kandasamy et al., 2018; Urteaga et al., 2023],
Riquelme et al. [2018] benchmarked some of these techniques, and reported that neural networks with approximate inference, even if successful for supervised learning, under-perform in the MAB setting.
for the successful performance of SMC methods for inference of linear dynamical states in practice [Urteaga et al., 2017; Urteaga and Djurić, 2016a, b].
C
For example, Patient 8 prefers to work out at 20:00 every day, and the level of working out is reduced on weekends.
Most of the glucose measurements after the meals, on the other hand, are logged after at least four hours for most of the patients.
Among all patients, Patient 12 seems to enjoy working out the least, and the period during which she burns the most calories is around noon.
For activities, we observe that certain patients have a favorite time of “working out” during the day, and it does not change much across days.
C
between predictions and targets. The best results are marked in bold and models are sorted in descending order of their cumulative rank across a subset of weakly correlated evaluation measures within each group.
Table 3: The number of trainable parameters for all deep learning models listed in Table 1 that are competing in the MIT300 saliency benchmark. Entries of prior work are sorted according to increasing network complexity and the superscript $^{\dagger}$ represents pre-trained models with a VGG16 backbone.
Table 1: Quantitative results of our model for the MIT300 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript $^{\dagger}$ represents models with a VGG16 backbone) from shallow networks and other machine learning methods. Entries between the second and the third line are models based on theoretical considerations and define a baseline rather than competitive performance. Arrows indicate whether the metrics assess similarity
Table 2: Quantitative results of our model for the CAT2000 test set in the context of prior work. The first line separates deep learning approaches with architectures pre-trained on image classification (the superscript $^{\dagger}$ represents models with a VGG16 backbone) from shallow networks and other machine learning methods. Entries between the second and third lines are models based on theoretical considerations and define a baseline rather than competitive performance. Arrows indicate whether the metrics assess similarity
Table 7: A list of the four image categories from the CAT2000 validation set that showed the largest average improvement by the ASPP architecture based on the cumulative rank across a subset of weakly correlated evaluation measures. Arrows indicate whether the metrics assess similarity
C
We observe that the reduction from MinCutwidth to MinLoc from Section 4.1 combined with the reduction from MinLoc to MinPathwidth from Section 5.2 gives a reduction from MinCutwidth to MinPathwidth. Moreover, this reduction is approximation preserving; thus, it carries over approximations for MinPathwidth (e. g., [21, 30]) to MinCutwidth, and yields new results for MinCutwidth.
Pathwidth and cutwidth are classical graph parameters that play an important role for graph algorithms, independently of our application for computing the locality number. Therefore, it is the main purpose of this section to translate the reduction from MinCutwidth to MinPathwidth that takes MinLoc as an intermediate step into a direct reduction from MinCutwidth to MinPathwidth. Such a reduction is of course implicitly hidden in the reductions of Sections 4.1 and 5.2, but we believe that explaining the connection in a more explicit way will be helpful for researchers who are mainly interested in the graph parameters cutwidth and pathwidth.
One of the main results of this section is a reduction from the problem of computing the locality number of a word $\alpha$ to the problem of computing the pathwidth of a graph. This reduction, however, does not technically provide a reduction from the decision problem Loc to Pathwidth, since the constructed graph’s pathwidth ranges between $\operatorname{loc}(\alpha)$ and $2\operatorname{loc}(\alpha)$, and therefore the reduction cannot be used to solve MinLoc exactly. The main purpose of this reduction is to carry over approximation results from MinPathwidth to MinLoc (also recall that exact and fpt-algorithms for MinLoc are obtained in Section 4 via a reduction to MinCutwidth). Hence, in this section we are mainly concerned with approximation algorithms.
In the following, we obtain an approximation algorithm for the locality number by reducing it to the problem of computing the pathwidth of a graph. To this end, we first describe another way in which a word can be represented by a graph. Recall that the reduction to cutwidth from Section 4 also transforms words into graphs. The main difference is that the reduction from Section 4 turns every symbol from the alphabet into an individual vertex of the graph (thus producing a graph with $O(|\Sigma|)$ vertices), while the reduction to pathwidth will use a vertex per position of the word $\alpha$, i.e., $|\alpha|$ individual vertices. In the reduction from Section 4 the information about the actual occurrences of the symbols in the word is encoded by the edges (in particular, the length $|\alpha|$ is represented by the number of edges), while in the following reduction the alphabet is encoded by connecting the vertices that correspond to positions of the same symbol into cliques in the graph (in particular, the number of edges may range between $|\alpha|$ and $|\alpha|^{2}$). We proceed with a formal definition and an example.
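The clique construction described above can be sketched directly. This shows only the position-per-vertex, clique-per-symbol part made explicit in the text; the paper's full reduction and its formal definition may contain further details:

```python
# One vertex per position of the word; the positions of each symbol
# are connected into a clique.
from collections import defaultdict
from itertools import combinations

def word_to_clique_graph(word):
    """Return (vertices, edges); vertices are positions 0..len(word)-1."""
    positions = defaultdict(list)
    for pos, symbol in enumerate(word):
        positions[symbol].append(pos)
    edges = set()
    for pos_list in positions.values():
        edges.update(combinations(pos_list, 2))  # clique per symbol
    return set(range(len(word))), edges

vertices, edges = word_to_clique_graph("abab")
print(sorted(edges))  # [(0, 2), (1, 3)]
```

For a word in which every symbol occurs once, the graph has $|\alpha|$ vertices and no edges; for a word over a single symbol, the clique has on the order of $|\alpha|^{2}$ edges, matching the range stated above.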
A
Based on these features, they then train an ensemble of regularized multi-layer perceptrons and an RF classifier to predict the pathological target class.
In [143] the authors created a semi-supervised learning method, in which a segmentation network for LV/RV and myocardium was trained from labeled and unlabeled data.
Isensee et al. [141] used an ensemble of a 2D and a 3D u-net for segmentation of the LV/RV cavity and the LV myocardium on each time instance of the cardiac cycle.
Patravali et al. [140] trained a model based on u-net using Dice combined with cross entropy as a metric for LV/RV and myocardium segmentation.
There are also cardiology applications that used CRFs with deep learning as a segmentation refinement step in fundus photography [171, 174], and in LV/RV [143].
A
To match the approximate posterior to the assumed prior, we use a Kullback–Leibler divergence term as an additional loss term (Babaeizadeh et al., 2017a).
We noticed two major issues with the above model. First, the weight of the KL divergence loss term is game-dependent, which is not practical if one wants to deal with a broad portfolio of Atari games. Second, this weight is usually a very small number in the range of $[10^{-3}, 10^{-5}]$, which means that the approximated posterior can diverge significantly from the assumed prior. This can result in previously unseen latent values at inference time that lead to poor predictions. We address these issues by utilizing a discrete latent variable similar to Kaiser & Bengio (2018).
Human players can learn to play Atari games in minutes (Tsividis et al., 2017). However, some of the best model-free reinforcement learning algorithms require tens or hundreds of millions of time steps – the equivalent of several weeks of training in real time. How is it that humans can learn these games so much faster? Perhaps part of the puzzle is that humans possess an intuitive understanding of the physical processes that are represented in the game: we know that planes can fly, balls can roll, and bullets can destroy aliens. We can therefore predict the outcomes of our actions. In this paper, we explore how learned video models can enable learning in the Atari Learning Environment (ALE) benchmark Bellemare et al. (2015); Machado et al. (2018) with a budget restricted to 100K time steps – roughly two hours of play time.
Figure 2: Architecture of the proposed stochastic model with discrete latent variables. The input to the model is four stacked frames (as well as the action selected by the agent), while the output is the next predicted frame and the expected reward. Input pixels and the action are embedded using fully connected layers, and there is a per-pixel softmax (256 colors) in the output. This model has two main components. First, the bottom part of the network, which consists of a skip-connected convolutional encoder and decoder. To condition the output on the actions of the agent, the output of each layer in the decoder is multiplied with the (learned) embedded action. The second part of the model is a convolutional inference network which approximates the posterior given the next frame, similarly to Babaeizadeh et al. (2017a). At training time, the sampled latent values from the approximated posterior are discretized into bits. To keep the model differentiable, the backpropagation bypasses the discretization, following Kaiser & Bengio (2018). A third, LSTM-based network is trained to approximate each bit given the previous ones. At inference time, the latent bits are predicted auto-regressively using this network. The deterministic model has the same architecture as this figure but without the inference network.
A stochastic model can be used to deal with the limited horizon of past observed frames as well as sprite occlusion and flickering, which results in higher-quality predictions. Inspired by Babaeizadeh et al. (2017a), we tried a variational autoencoder (Kingma & Welling, 2014) to model the stochasticity of the environment. In this model, an additional network receives the input frames as well as the future target frame as input and approximates the distribution of the posterior. At each timestep, a latent value $z_{t}$ is sampled from this distribution and passed as input to the original predictive model. At test time, the latent values are sampled from an assumed prior
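The KL regularizer discussed above has a standard closed form when the approximate posterior is Gaussian and the prior is standard normal. A minimal sketch, assuming a diagonal Gaussian posterior per latent dimension; the weight `beta` stands for the small game-dependent coefficient whose tuning is criticized in the text:

```python
# Closed-form KL( N(mu, exp(log_var)) || N(0, 1) ) for one latent
# dimension, scaled by a small weight beta as an extra loss term.
import math

def gaussian_kl(mu, log_var):
    """KL divergence of N(mu, exp(log_var)) from a standard normal."""
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)

beta = 1e-4  # illustrative weight in the [1e-5, 1e-3] range mentioned above
print(gaussian_kl(mu=0.0, log_var=0.0))  # 0.0: posterior equals the prior
loss_term = beta * gaussian_kl(mu=2.0, log_var=0.5)  # positive penalty
```

With such a tiny `beta`, the penalty barely constrains the posterior, which is exactly how previously unseen latent values can arise at inference time.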
A
Using this definition we can also derive that most previous methods for EEG classification use non-trainable S2Is and that no previous study has compared trainable with non-trainable S2Is.
In this paper we have shown empirical evidence that 1D ‘base model’ variations and trainable S2Is (especially the one layer CNN) perform better than non-trainable S2Is.
In this paper we compare non-trainable and trainable S2Is combined with well-known ‘base model’ neural network architectures, along with the 1D and depth-wise variations of the latter.
$B$ includes the following $b_{d}$ along with their depth-wise variations and their equivalent 1D architectures for $d=1$ (for a complete list refer to the first two rows of Table I):
C
Similarly, when the robot encountered a step with a height of 3h (as shown in Fig. 12), the mode transition was activated when the energy consumption of the rear track negotiation in rolling mode surpassed the threshold value derived from the previously assessed energy results of the rear body climbing gait. The results highlight the superior energy efficiency of the proposed autonomous locomotion mode transition method for negotiating steps of different heights, in contrast to solely depending on the rolling locomotion mode. This underscores the efficiency of the proposed strategy in enabling energy-conscious step negotiation across various terrains and obstacles.
In this section, we explore the autonomous locomotion mode transition of the Cricket robot. We present our hierarchical control design, which is simulated in a hybrid environment comprising MATLAB and CoppeliaSim. This design facilitates the decision-making process when transitioning between the robot’s rolling and walking locomotion modes. Through energy consumption analyses during step negotiations of varied heights, we establish energy criterion thresholds that guide the robot’s transition from rolling to walking mode. Our simulation studies reveal that the Cricket robot can autonomously switch to the most suitable locomotion mode based on the height of the steps encountered.
The implementation of the energy criterion strategy has proven effective in facilitating autonomous locomotion mode transitions for the Cricket robot when negotiating steps of varying heights. Compared to step negotiation purely in rolling locomotion mode, the proposed strategy demonstrated significant enhancements in energy performance, particularly for taller steps. A significant feature of this method is the determination of transition criterion threshold values based on studies of alternative locomotion modes rather than relying on empirical settings. This contribution is crucial as it ensures a more systematic and objective approach to setting the thresholds for locomotion mode transitions.
The cornerstone of our transition criterion combines energy consumption data with the geometric heights of the steps encountered. These threshold values are determined in energy evaluations while the robot operates in the walking locomotion mode. To analyze the energy dynamics during step negotiation in this mode, we propose two climbing gaits to achieve appropriate locomotion behaviors [10]. A distinguishing feature of our approach is its independence from specific mechanical designs [14, 15], rendering it adaptable to a wide array of hybrid robots. Ultimately, our method marks a pivotal advancement in the realm of autonomous mode transitions during step negotiation, as it holistically integrates both internal and external determinants to finalize transition thresholds.
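The decision rule described above can be sketched as a simple threshold check. This is an illustrative sketch only: the function names and the linear walking-gait energy model below are hypothetical placeholders for the thresholds determined in the paper's energy evaluations:

```python
# Energy-criterion transition rule: roll by default, switch to the
# walking (climbing) mode once the measured rolling energy for a step
# exceeds the threshold derived from the walking-gait energy budget
# for that step height.

def select_mode(rolling_energy, step_height, walking_energy_model):
    """Return 'rolling' or 'walking' for the current step attempt."""
    threshold = walking_energy_model(step_height)
    return "walking" if rolling_energy > threshold else "rolling"

# Hypothetical walking-gait energy model, linear in step height:
walking_energy = lambda h: 50.0 + 30.0 * h

print(select_mode(80.0, 1.0, walking_energy))   # rolling
print(select_mode(200.0, 3.0, walking_energy))  # walking
```

The key property, as stated above, is that the threshold comes from evaluating the alternative locomotion mode rather than from empirical tuning.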
C
$(1.5625, 3.75)$-competitive. Similarly, for $\alpha = 0.868$, we get a $(1.5783, 3.56)$-algorithm, whose consistency is the same as that of the best existing online algorithms. Therefore, our results are useful for values of $\alpha > 0.868$, where improvements over online algorithms without prediction are realized.
In contrast, for α < 0.868, the best-known competitive algorithms without predictions dominate our proposed solution.
Lower bounds establish strict limitations on the power of any online algorithm; there are strong connections between randomized online algorithms and online algorithms with advice (see, e.g., [27]); online algorithms with advice can be of practical interest in settings in which it is feasible to run multiple algorithms and output the best solution (see [20] about obtaining improved data compression algorithms by means of list update algorithms with advice); and the first complexity classes for online computation have been based on advice complexity [10].
is 1.7. Many other algorithms with improved competitive ratios have been studied. The best known algorithm was introduced by Balogh et al. [6] and has a competitive ratio of at most 1.5783. Moreover, it is known that no online algorithm can achieve a competitive ratio better than 1.54278 [7].
where sg and sn are functions of the form f : W × C → [0, 1].
As we will see, the former decreases lv in relation to the global significance of w, and the latter sanctions it in relation to the number of categories for which w is significant.
Our approach to calculating gv, as we will see later, tries to overcome some problems arising from valuing words only on the basis of information local to a category. This is carried out by, firstly, computing a word's local value (lv) for every category and, secondly, combining these to obtain the global value of the word in relation to all the categories.
where gv(w, c) = v is read as "w has a global value of v in c" or, alternatively, "the global value of w in c is v".
Finally, we need to define sn, the sanction function, which will proportionally decrease the global value of w in relation to the number of categories for which w is significant. Hence sn should be a function such that: (a) when w is significant (i.e., sg_λ(w, c) ≈ 1) to only one category c, sn(w, c) should be equal to 1; (b) the greater the number of categories w is significant to, the lower the value of sn(w, c). Therefore, we have defined sn by:
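One function satisfying conditions (a) and (b) simply normalizes significance across categories. This is an illustrative choice, not necessarily the authors' exact definition, and the toy significance table below is made up:

```python
def sn(w, c, sg, categories):
    """Sanction: close to 1 when w is significant only to c; decreases
    as w becomes significant to more categories. sg(w, c) is in [0, 1]."""
    total = sum(sg(w, ci) for ci in categories)
    return sg(w, c) / total if total > 0 else 0.0

# Toy significance values (hypothetical, for illustration only).
_sig = {("interest", "economy"): 0.9, ("interest", "sports"): 0.9,
        ("goal", "sports"): 0.95}
sg_toy = lambda w, c: _sig.get((w, c), 0.0)
```

With these toy values, "goal" is significant to one category and keeps its full value, while "interest", significant to two categories, is sanctioned to half.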
The RCC of DMSGD is 100% (no compression). Here, all numbers have the same unit (float value).
Table 1 shows the empirical results of different methods under IID data distribution. Figure 3 shows the training curves under IID data distribution. We can observe that each method achieves comparable RCC. As for test accuracy, GMC and DGC (w/ mfm) exhibit comparable performance and outperform the other three methods.
Table 2 and Figure 4 show the performance under non-IID data distribution. We find that GMC achieves much better test accuracy and faster convergence than the other methods. Furthermore, the momentum factor masking trick severely impairs the performance of DGC under non-IID data distribution.
We adopt two popular deep models: ResNet20 (He et al., 2016) and Vision Transformer (ViT) (Lee et al., 2021) with four Transformer blocks. Although Batch Normalization (BN) in ResNet20 is effective in practice, it is known to be problematic in the non-IID setting due to its dependence on the estimated mean and variance (Hsieh et al., 2020). We replace BN in ResNet20 with Group Normalization (GN) (Wu and He, 2018) under non-IID data distribution as suggested in (Hsieh et al., 2020; Lin et al., 2021). We train the models with 200 epochs.
We use the CIFAR10 and CIFAR100 datasets under both IID and non-IID data distribution. For the IID scenario, the training data is randomly assigned to each worker. For the non-IID scenario, we use Dirichlet distribution with parameter 0.1 to partition the training data as in (Hsu et al., 2019; Lin et al., 2021).
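The Dirichlet-based non-IID partition mentioned above can be sketched as follows. The per-class splitting recipe in `dirichlet_partition` is a common variant of this technique and an assumption about the exact procedure used:

```python
import numpy as np

def dirichlet_partition(labels, n_workers, alpha=0.1, seed=0):
    """Assign sample indices to workers; smaller alpha -> more label skew.

    For each class, draw a Dirichlet(alpha) vector over workers and split
    that class's (shuffled) indices proportionally.
    """
    rng = np.random.default_rng(seed)
    shards = [[] for _ in range(n_workers)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_workers))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for w, part in enumerate(np.split(idx, cuts)):
            shards[w].extend(part.tolist())
    return shards
```

Every sample lands on exactly one worker; with alpha = 0.1 most workers end up dominated by a few classes, which is the non-IID regime used in the experiments.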
A limitation of SANs is the use of varying amplitude-only kernels, which are not sufficient for more complex data and also do not fully utilize the compressibility of the data.
The Extrema-Pool indices activation function (defined in Algorithm 2) keeps only the index of the activation with the maximum absolute amplitude from each region outlined by a grid as granular as the kernel size m^(i), and zeros out the rest.
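In 1-D, this operation can be sketched with NumPy; the handling of a trailing partial window is an assumption on our part:

```python
import numpy as np

def extrema_pool_indices(x, m):
    """Keep, in each non-overlapping window of size m, only the sample
    with the maximum absolute amplitude; zero everything else.
    (Sketch of the described activation for a 1-D signal.)"""
    out = np.zeros_like(x, dtype=float)
    for start in range(0, len(x), m):
        window = x[start:start + m]
        k = start + int(np.argmax(np.abs(window)))
        out[k] = x[k]
    return out

sparse = extrema_pool_indices(np.array([1.0, -3.0, 2.0, 0.5, 4.0, -1.0]), 3)
```

The result keeps one nonzero value per window (here -3.0 and 4.0), which is what gives the activation its sparsifying, compressive character.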
It is interesting to note that in some cases SANs reconstructions, such as for the Extrema-Pool indices, performed even better than the original data.
The majority of domains where machine learning is applied, including critical areas such as healthcare [26], require models to be interpretable and explainable before considering them as a solution.
A possible solution would be using a grid sampler [45] on the kernel allowing it to learn more general transformations (such as scale) than simple amplitude variability.
Consider several UAVs in a UAV ad-hoc network game with potential function φ : S → R. When all UAVs adhere to SPBLLA, if m is large enough, the stochastically stable strategies are maximizers of the potential function, which are PSNEs.
Definition 3 indicates that the change in the utility function equals the change in the potential function, which gives the potential game an ideal property.
Let each UAV alter its strategy as much as possible so that its utility function changes significantly. By calculating the largest difference a utility function can make in one iteration, we can learn the range of m.
However, we have to recognize that the strategy-altering probability ω severely impacts the efficiency of SPBLLA. If Theorem 5 requires m to be a large value, the probability will decrease. When m is too large, UAVs can hardly move, and the learning rate will decrease. At some point, the learning rate of SPBLLA falls below that of PBLLA. In our UAV ad-hoc network scenario, when τ = 0.01 and m = 0.03 (circled in Fig. 15), the probability of altering strategies is ω < 0.01. The probability of altering strategies in SPBLLA is then less than that of PBLLA, so SPBLLA will spend more learning time.
According to Appendix B, for SPBLLA to converge, m should be more than twice the largest amount by which any UAV's utility function can change.
D
[Figure: snapshots at 0 μs (a), 9 μs (b), 18 μs (c), 45 μs (d), and 65 μs (e).]
One of the motivations for our work draws from a collaboration with an industrial partner specialized in cold chain, refrigeration and conditioning (CEMAFROID).
Within this collaboration, a finer notion of equality was key to selecting relevant data in SQL in order to obtain better results.
When using the framework, one can further require reflexivity of the comparability functions, i.e., f(x_A, x_A) = 1_A, in order to get a semantics of comparability closer to equality.
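A reflexive comparability function on a numeric domain can be sketched as follows; the tolerance-based linear decay is purely an illustrative choice, not the framework's definition, and `make_comparability` is a hypothetical helper:

```python
def make_comparability(tol):
    """Comparability on a numeric domain: 1.0 at equality (reflexivity),
    decaying to 0.0 beyond a tolerance (illustrative shape only)."""
    def f(x, y):
        return max(0.0, 1.0 - abs(x - y) / tol)
    return f

f = make_comparability(tol=2.0)
# Reflexivity holds by construction: f(x, x) = 1 for every x.
```

Any function of this shape maps into [0, 1] and satisfies f(x, x) = 1, the reflexivity requirement discussed above.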
Interestingly, while the results we present in the body of this paper are unchanged by reflexivity, we show in C that reflexivity is a key property to ensure completeness of (extended) Armstrong axioms.
There is ongoing work on SQL queries based on these principles within the context of a collaboration with CEMAFROID.
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as indicated by the reduced standard deviation between the variants. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and after applying Dropout (Dropout-methods DQN). There was a statistically significant decrease in variance (14.72% between Gaussian Dropout and DQN, 48.89% between Variational Dropout and DQN). Furthermore, one of the Dropout methods outperformed the DQN score.
The findings indicate that Dropout can effectively reduce the variance and overestimation issues in DQN, leading to more stable learning curves and notably enhanced performance.
Figure 5 demonstrates that using Dropout methods in DQN reduces overestimation relative to the optimal policy. Although the Gridworld environment does not suffer from substantial overestimation that could distort the overall cumulative rewards, reducing overestimation still leads to more accurate predictions.
In this study, we proposed and experimentally analyzed the benefits of incorporating the Dropout technique into the DQN algorithm to stabilize training, enhance performance, and reduce variance. Our findings indicate that the Dropout-DQN method is effective in decreasing both variance and overestimation. However, our experiments were limited to simple problems and environments, utilizing small network architectures and only two Dropout methods.
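The Dropout technique referenced throughout can be sketched as standard inverted dropout; this is a generic NumPy illustration of the regularizer, not the paper's specific Gaussian or Variational variants or its network architecture:

```python
import numpy as np

def dropout(x, p, rng, train=True):
    """Inverted dropout: zero each unit with probability p during
    training and rescale the survivors by 1/(1-p), so the expected
    activation matches the (mask-free) test-time behavior."""
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
activations = dropout(np.ones(8), p=0.5, rng=rng)
```

Averaged over many units, the rescaling keeps the mean activation near its original value, which is why the same network can be used unchanged at evaluation time.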
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a fundamentally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised learning and unsupervised learning. Reinforcement Learning is concerned with finding a sequence of actions an agent can follow to solve the task in the environment [1][2][3]. Most Reinforcement Learning techniques estimate the consequences of actions in order to find an optimal policy, in the form of a sequence of actions the agent can follow to solve the task. The process of choosing the optimal policy is based on selecting actions that maximize the future payoff of an action. Finding an optimal policy is the main concern of Reinforcement Learning, and for that reason many algorithms have been introduced over time, e.g., Q-learning [4], SARSA [5], and policy gradient methods [6]. These methods use linear function approximation techniques to estimate action values, where convergence is guaranteed [7]. However, as challenges in modeling complex patterns increase, the need for expressive and flexible non-linear function approximators becomes clear. Recent advances in deep neural networks helped to develop an artificial agent named the deep Q-network (DQN) [8] that can learn successful policies directly from high-dimensional features. Despite the remarkable flexibility and huge representative capability of DQN, some issues emerge from the combination of Q-learning and neural networks. One of these issues, known as the "overestimation phenomenon," was first explored by [9]. They noted that the expansion of the action space in the Q-learning algorithm, along with generalization errors in neural networks, often results in an overestimation and increased variance of state-action values.
They suggested that to counter these issues, further modifications and enhancements to the standard algorithm would be necessary to boost training stability and diminish overestimation. In response, [10] introduced Double-DQN, an improvement that incorporates the double Q-learning estimator [11], aiming to address the challenges of variance and overestimation. Additionally, [31] developed the Averaged-DQN algorithm, a significant improvement over the standard DQN. By averaging previously learned Q-values, Averaged-DQN effectively lowers the variance in target value estimates, thus enhancing training stability and overall performance.
Chartsias et al. (2017) used a conditional GAN to generate cardiac MR images from CT images. They showed that utilizing the synthetic data increased the segmentation accuracy and that using only the synthetic data led to only a marginal decrease in the segmentation accuracy. Similarly, Zhang et al. (2018c) proposed a GAN based volume-to-volume translation for generating MR volumes from corresponding CT volumes and vice versa. They showed that synthetic data improve segmentation performance on cardiovascular MRI volumes. Huo et al. (2018) proposed an end-to-end synthesis and segmentation network called EssNet to simultaneously synthesize CT images from unpaired MR images and to segment CT splenomegaly on unlabeled CT images and showed that their approach yielded better segmentation performance than even segmentation obtained using models trained using the manual CT labels. Abhishek and Hamarneh (2019) trained a conditional GAN to generate skin lesion images from and confined to binary masks, and showed that using the synthesized images led to a higher skin lesion segmentation accuracy. Zhang et al. (2018b) trained a GAN for translating between digitally reconstructed radiographs and X-ray images and achieved similar accuracy as supervised training in multi-organ segmentation. Shin et al. (2018) proposed a method to generate synthetic abnormal MRI images with brain tumors by training a GAN using two publicly available data sets of brain MRI. Similarly, other works (Han et al., 2019; Yang et al., 2018; Yu et al., 2018a) have leveraged GANs to synthesize brain MR images.
 Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while also achieving almost similar performance to full supervision for segmentation of cardiac images. Afshari et al. (2019) used a fully convolutional architecture along with a Mumford-Shah functional Mumford and Shah (1989) inspired loss function to segment lesions from PET scans using only bounding box annotations as supervision. Mirikharaji et al. (2019) proposed to learn spatially adaptive weight maps to account for spatial variations in pixel-level annotations and used noisy annotations to train a segmentation model for skin lesions. Taghanaki et al. (2019d) proposed to learn spatial masks using only image-level labels with minimizing mutual information between the input and masks, and at the same time maximizing the mutual information between the masks and image labels. Peng et al. (2019) proposed an approach to train a CNN with discrete constraints and regularization priors based on the alternating direction method of multipliers (ADMM). Perone and Cohen-Adad (2018) expanded the semi-supervised mean teacher (Tarvainen and Valpola, 2017) approach to segmentation tasks on MRI data, and show that it can bring important improvements in a realistic small data regime. In another work, Perone et al. (2019) extended the method of unsupervised domain adaptation using self-ensembling for the semantic segmentation task. They showed how this approach could improve the generalization of the models even when using a small amount of unlabeled data.
Collecting large-scale accurate pixel-level annotation is time-consuming and financially expensive. However, unlabeled and weakly-labeled images can be collected in large amounts in a relatively fast and cheap manner. As shown in Figure 2, varying levels of supervision are possible when training deep segmentation models, from pixel-wise annotations (supervised learning) and image-level and bounding box annotations (semi-supervised learning) to no annotations at all (unsupervised learning), the last two of which comprise weak supervision. Therefore, a promising direction for semantic image segmentation is to develop weakly supervised segmentation models.
Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic segmentation as well as traditional machine learning based methods and deep learning-based network architectures for RGB-D segmentation. Lateef and Ruichek (2019) presented an extensive survey of deep learning architectures, datasets, and evaluation methods for the semantic segmentation of natural images using deep neural networks. Similarly, for medical imaging, Goceri and Goceri (2017) presented a high-level overview of deep learning-based medical image analysis techniques and application areas. Hesamian et al. (2019) presented an overview of the state-of-the-art methods in medical image segmentation using deep learning by covering the literature related to network structures and model training techniques. Karimi et al. (2019) reviewed the literature on techniques to handle label noise in deep learning based medical image analysis and evaluated existing approaches on three medical imaging datasets for segmentation and classification tasks. Zhou et al. (2019b) presented a review of techniques proposed for fusion of medical images from multiple modalities for medical image segmentation. Goceri (2019a) discussed the fully supervised, weakly supervised and transfer learning techniques for training deep neural networks for segmentation of medical images, and also discussed the existing methods for addressing the problems of lack of data and class imbalance. Zhang et al. (2019) presented a review of the approaches to address the problem of small sample sizes in medical image analysis, and divided the literature into five categories including explanation, weakly supervised, transfer learning, and active learning techniques. Tajbakhsh et al. (2020) presented a review of the literature for addressing the challenges of scarce annotations as well as weak annotations (e.g., noisy annotations, image-level labels, sparse annotations, etc.) in medical image segmentation. Similarly, there are several surveys covering the literature on the task of object detection (Wang et al., 2019c; Zou et al., 2019; Borji et al., 2019; Liu et al., 2019b; Zhao et al., 2019), which can also be used to obtain what can be termed as rough localizations of the object(s) of interest. In contrast to the existing surveys, we make the following contributions in this review:
The scarcity of richly annotated medical images is limiting supervised deep learning-based solutions to medical image analysis tasks (Perone and Cohen-Adad, 2019), such as localizing discriminatory radiomic disease signatures. Therefore, it is desirable to leverage unsupervised and weakly supervised models.
Importantly, when the solution of the spectral algorithm becomes worse than the random cut, the MAXCUT upper bound is close to 0.5.
In every example, when λ^s_max becomes lower than 1 − τ, the solution of the spectral algorithm is still larger than the cut induced by the random partition.
Therefore, when the spectral cut is lower than 0.5 it is possible to return the random partition instead, which yields a nearly-optimal solution.
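The fallback just described can be sketched as follows: evaluate the spectral cut and return the random partition whenever it does better. This is a generic sketch; `cut_value`, the dense adjacency-matrix representation, and the uniform random partition are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def cut_value(adj, sides):
    """Fraction of total edge weight crossing the bipartition."""
    n = len(adj)
    cross = sum(adj[i, j] for i in range(n) for j in range(i + 1, n)
                if sides[i] != sides[j])
    total = adj[np.triu_indices_from(adj, k=1)].sum()
    return cross / total

def cut_with_fallback(adj, spectral_sides, rng):
    """Return the spectral partition unless a uniform random one beats it."""
    random_sides = rng.integers(0, 2, size=len(adj))
    best = max([spectral_sides, random_sides],
               key=lambda s: cut_value(adj, s))
    return best, cut_value(adj, best)
```

Since a uniform random partition cuts half the edge weight in expectation, the returned cut value is never much below 0.5, matching the observation above.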
Welbl: Welbl (2014) and Biau et al. (2019) present a similar mapping with subsequent fine-tuning. The authors introduce two training modes: independent and joint. The first optimizes each small network individually, while the latter joins all mapped decision trees into one network. Additionally, the authors evaluate a network with sparse connections and regular fully connected networks (denoted as sparse and full).
Network splitting (Massiceti et al., 2017) slightly reduces the number of parameters of the networks.
Massiceti: Massiceti et al. (2017) present a network splitting strategy to reduce the number of network parameters. The decision trees are divided into subtrees and mapped individually while sharing common split nodes. The optimal depth of the subtrees is determined by evaluating all possible values.
Massiceti et al. (2017) extend this approach and introduce a network splitting strategy by dividing each decision tree into multiple subtrees. The subtrees are mapped individually and share common neurons for evaluating the split decision.
Network splitting proposed by Massiceti et al. (2017) maps multiple subtrees while sharing common split nodes and reduces the average number of network parameters to 748,000.
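The tree-to-network mapping discussed in this section can be illustrated with a minimal hard-threshold construction: one first-layer unit per split node and one second-layer unit per leaf. `tree_to_network` and its input encoding are simplified assumptions in the spirit of Welbl's mapping, not the exact published architecture (which uses trainable soft activations):

```python
import numpy as np

def tree_to_network(splits, leaves):
    """Map an axis-aligned decision tree to a two-layer network sketch.

    splits: list of (feature, threshold) for each inner node.
    leaves: list of (path, value); path maps split index -> required
            sign (+1 = go right, -1 = go left) on the way to that leaf.
    """
    def forward(x):
        # Layer 1: one unit per split node, h_j = sign(x[f] - t).
        h = np.sign([x[f] - t for f, t in splits])
        # Layer 2: one unit per leaf; a leaf "fires" when every split
        # on its path agrees with the required direction.
        for path, value in leaves:
            if all(h[j] == s for j, s in path.items()):
                return value
        raise ValueError("no leaf matched")
    return forward

# Depth-2 example tree: split on x0 at 0.5, then (right branch) x1 at 0.
splits = [(0, 0.5), (1, 0.0)]
leaves = [({0: -1}, "A"), ({0: 1, 1: -1}, "B"), ({0: 1, 1: 1}, "C")]
net = tree_to_network(splits, leaves)
```

Replacing the hard `sign` with a steep sigmoid/tanh yields the differentiable version that the fine-tuning approaches above train further.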
In a more practical setting, the agent sequentially explores the state space, and meanwhile, exploits the information at hand by taking the actions that lead to higher expected total rewards. Such an exploration-exploitation tradeoff is better captured by the aforementioned statistical question regarding the regret or sample complexity, which remains even more challenging to answer than the computational question. As a result, such a lack of statistical understanding hinders the development of more sample-efficient policy optimization algorithms beyond heuristics. In fact, empirically, vanilla policy gradient is known to exhibit a possibly worse sample complexity than random search (Mania et al., 2018), even in basic settings such as linear-quadratic regulators. Meanwhile, theoretically, vanilla policy gradient can be shown to suffer from exponentially large variance in the well-known “combination lock” setting (Kakade, 2003; Leffler et al., 2007; Azar et al., 2012a), which only has a finite state space.
We study the sample efficiency of policy-based reinforcement learning in the episodic setting of linear MDPs with full-information feedback. We propose an optimistic variant of the proximal policy optimization algorithm, dubbed OPPO, which incorporates the principle of "optimism in the face of uncertainty" into policy optimization. When applied to the episodic MDP with unknown transition and adversarial reward, OPPO provably achieves a √(d²H³T)-regret up to logarithmic factors, which is near-optimal. To the best of our knowledge, OPPO is the first provably efficient policy optimization algorithm that explicitly incorporates exploration.
To answer this question, we propose the first policy optimization algorithm that incorporates exploration in a principled manner. In detail, we develop an Optimistic variant of the PPO algorithm, namely OPPO. Our algorithm is also closely related to NPG and TRPO. At each update, OPPO solves a Kullback-Leibler (KL)-regularized policy optimization subproblem, where the linear component of the objective function is defined using the action-value function. As is shown subsequently, solving such a subproblem corresponds to one iteration of infinite-dimensional mirror descent (Nemirovsky and Yudin, 1983) or dual averaging (Xiao, 2010), where the action-value function plays the role of the gradient. To encourage exploration, we explicitly incorporate a bonus function into the action-value function, which quantifies the uncertainty that arises from only observing finite historical data. Through uncertainty quantification, such a bonus function ensures the (conservative) optimism of the updated policy. Based on NPG, TRPO, and PPO, OPPO only augments the action-value function with the bonus function in an additive manner, which makes it easily implementable in practice.
The policy improvement step defined in (3.2) corresponds to one iteration of NPG (Kakade, 2002), TRPO (Schulman et al., 2015), and PPO (Schulman et al., 2017). In particular, PPO solves the same KL-regularized policy optimization subproblem as in (3.2) at each iteration, while TRPO solves an equivalent KL-constrained subproblem. In the special case where the reward function r_h^{k−1} is linear in the feature map φ_h^{k−1} defined subsequently, which implies that the Q-function Q_h^{π^{k−1}, k−1} is also linear in φ_h^{k−1}, the updated policy π^k can be equivalently obtained by one iteration of NPG when the policy is parameterized by an energy-based distribution (Agarwal et al., 2019; Wang et al., 2019). Such a policy improvement step can also be cast as one iteration of infinite-dimensional mirror descent (Nemirovsky and Yudin, 1983) or dual averaging (Xiao, 2010), where the Q-function plays the role of the gradient (Liu et al., 2019; Wang et al., 2019).
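For a finite action set, one iteration of this KL-regularized (mirror descent) update has the well-known closed form π_new(a) ∝ π_old(a) · exp(α·Q(a)). A minimal sketch, where the step size α and the toy Q-values are illustrative:

```python
import numpy as np

def kl_regularized_update(pi_old, q_values, alpha):
    """One mirror-descent step over a finite action set:
    pi_new(a) proportional to pi_old(a) * exp(alpha * Q(a)).
    pi_old must be strictly positive (a full-support policy)."""
    logits = np.log(pi_old) + alpha * np.asarray(q_values)
    w = np.exp(logits - logits.max())   # subtract max for stability
    return w / w.sum()

pi0 = np.full(4, 0.25)
pi1 = kl_regularized_update(pi0, [1.0, 0.0, 0.0, 0.0], alpha=2.0)
```

With α = 0 the policy is unchanged, and as α → ∞ the update approaches the greedy (policy iteration) step discussed next.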
step with α → ∞ corresponds to one step of policy iteration (Sutton and Barto, 2018), which converges to the globally optimal policy π* within K = H episodes and hence equivalently induces an H²-regret. However, in the realistic setting, the Q-function Q_h^{π^{k−1}, k−1} in (3.1)–(3.3) is replaced by the estimated Q-function Q_h^{k−1} in Line 6 of Algorithm 1, which is obtained by the policy evaluation step defined in (3.1). As a result of the estimation uncertainty that arises from only observing finite historical data, it is indeed impossible to do better than the √T-regret even in the tabular setting (Jin et al., 2018), which is shown to be an information-theoretic lower bound. In the linear setting, OPPO attains such a lower bound in terms of the total number of steps T = HK. In other words, in the stationary setting, being "conservatively" greedy suffices to achieve sample efficiency, which complements its advantages in terms of robustness in the more challenging setting with adversarially chosen reward functions.
In this section we compare these specialized forms of compression on their respective hardware in terms of absolute performance to identify the most promising compute concepts for DNNs.
Figure 6 shows test accuracy over throughput of the FINN data-flow architectures mapped to a XILINX Ultra96 FPGA using different bit combinations.
Figure 8: Throughput-accuracy trade-off of different compression methods for different processor architectures (CPU, FPGA, GPU) on the CIFAR-10 task.
Notably, whilst fundamentally different in architecture, from a system-level view these three processors, namely ARM Cortex-A57 CPU, NVIDIA Nano GPU, and XILINX Ultra96 FPGA, are comparable as they all exhibit a power consumption in the range of about 5 Watts.
We evaluate the inference throughput of the compressed models on an ARM CPU (Section 5.2.1), Xilinx FPGA (Section 5.2.2) and an embedded NVIDIA GPU (Section 5.2.3).
By Lemma 2.3, U_r is a good cover of B_r(X, E). Hence, by the nerve lemma (see [49, Corollary 4G.3]), B_r(X, E) is homotopy equivalent to the nerve of U_r, which is the same as the Čech complex Č_r(X, E). By Proposition 2.2, Č_r(X, E) = VR_{2r}(X). This concludes the proof.
One main contribution of this paper is establishing a precise relationship (i.e. a filtered homotopy equivalence) between the Vietoris-Rips simplicial filtration of a metric space and a more geometric (or extrinsic) way of assigning a persistence module to a metric space, which consists of first isometrically embedding it into a larger space and then considering the persistent homology of the filtration obtained by considering the resulting system of nested neighborhoods of the original space inside this ambient space. These neighborhoods, being also metric (and thus topological) spaces, permit giving a short proof of the Künneth formula for Vietoris-Rips persistent homology.
In Section 3, we construct a category of metric pairs. This category will be the natural setting for our extrinsic persistent homology. Although being functorial is trivial in the case of Vietoris-Rips persistence, the type of functoriality which one is supposed to expect in the case of metric embeddings is a priori not obvious. We address this question in Section 3 by introducing a suitable category structure.
In this section we consider a certain strong variant of the filling radius satisfying equation (11) which arises from the notion of persistent homology.
One of the insights leading to the notion of persistent homology associated to metric spaces was considering neighborhoods of a metric space in a nice (for example Euclidean) embedding [71]. In this section we formalize this idea in a categorical way.
Other recent approaches include DimReader [45], where the authors create so-called generalized axes for non-linear DR methods; however, besides explaining a single dimension at a time, it is currently unclear how exactly it can be used in an interactive exploration scenario.
We start by executing a grid search and, after a few seconds, we are presented with 25 representative projections. As we notice that the projections lack high values in continuity, we choose to sort the projections based on this quality metric for further investigation. Next, as the projections are quite different and none of them appears to have a clear advantage over the others, we pick one with good values for all the rest of the quality metrics (i.e., greater than 40%). The overview in Figure 7(a) shows the selected projection with three clear clusters of varying sizes (marked with C1, C2, and C3). However, the labels seem to be mixed in all of them. That means either the projections are not very good, or the labels are simply very hard to separate. By analyzing the Shepard Heatmap (Figure 7(b)), it seems that there is a distortion in how the projection represents the original N-D distances: the darker cells of the heatmap are above the diagonal and concentrated near the origin, which means that the lowest N-D distances (up to 30% of the maximum) have been represented in the projection with a wide range of 2-D distances (up to 60% of the maximum). While it may be argued that the data is too spread in the projection, we must always consider that t-SNE’s goal is not to preserve all pairwise distances, but only close neighborhoods. The projection has used most of its available 2-D space to represent (as best as possible) the smallest N-D distances, which can be considered a good trade-off for this specific objective. In the following paragraphs, we concentrate on some of the goals described in Subsection 4.3 and Subsection 4.4 for each of the three clusters.
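The Shepard Heatmap used above bins normalized N-D against 2-D pairwise distances; dark cells above the diagonal near the origin indicate small N-D distances spread over a wide range of 2-D distances. A minimal sketch of that computation (plain Python; the function name and binning details are assumptions, not the tool's actual code):

```python
from itertools import combinations
import math

def shepard_bins(high_d, low_d, bins=10):
    """Bin normalized pairwise distances (N-D vs 2-D) into a bins x bins grid."""
    def pdist(points):
        return [math.dist(a, b) for a, b in combinations(points, 2)]

    dh, dl = pdist(high_d), pdist(low_d)
    mh, ml = max(dh), max(dl)
    grid = [[0] * bins for _ in range(bins)]
    for h, l in zip(dh, dl):
        i = min(int(h / mh * bins), bins - 1)   # N-D bin (column)
        j = min(int(l / ml * bins), bins - 1)   # 2-D bin (row)
        grid[j][i] += 1
    return grid

hd = [(0.0, 0.0), (1.0, 0.0), (0.0, 3.0)]   # toy high-dimensional points
ld = [(0.0,), (1.0,), (3.0,)]               # their low-dimensional projection
print(shepard_bins(hd, ld, bins=2))  # → [[1, 0], [0, 2]]
```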
FocusChanger [50] empowers users to perform local analyses by setting Points of Interest (POIs) in a linear projection, which is then updated to enhance the representation of the selected POIs. When hovering over specific points, information about the true neighborhood of other points is mapped to color saturation. This allows for a simple quality-assessment mechanism, but it precludes using color for other mappings and requires pointwise interaction. The projections used are linear and, thus, potentially not as representative and useful as t-SNE. Similar to Andromeda, it relies on the possibility of quickly updating the projections, which might not currently be feasible with t-SNE.
Praxis [46] provides two methods, backward and forward projection, but it requires fast out-of-sample extensions, which are not available for the original t-SNE.
After the analysis, we decided on GEP mainly because it has a good overlap of functionalities with t-viSNE, is well-known, available online, and works correctly with user-provided data. VisCoDeR [22], for example, also provides an overlap of features, but the focus of the tool and the tasks it supports—the comparison of DR methods—is very different from the focus of our experiment. Clustervision [51], on the other hand, did not work when we tried to load our own data sets.
As previously mentioned, an ever-growing number of new bio-inspired optimization techniques has been proposed in recent decades (see Figure 1). This overwhelming number of alternatives can make it difficult to choose an appropriate option for a given optimization problem. The vast number of proposals not only casts doubt on the convenience of choosing one or another algorithm, but has also produced solvers that, even if relying on different metaphors, are mathematically too similar to already existing optimization algorithms. In other words, despite the diversity of methods in terms of their natural inspiration, such diversity does not hold as far as mathematical differences are concerned, as exposed by recent studies [13]. As mentioned in the introduction, this metaphor-driven research trend has already been denounced in several contributions [9, 10], which have unleashed heated debates around specific meta-heuristic schemes that remain unresolved to date [11, 12]. The problem keeps growing as important challenges go unaddressed while ever more biological inspirations are put forward, with more than 500 proposals observed in 2024.
Particular reasons aside, some algorithms are not created to solve problems and provide a practical advantage, but mainly to be published and gain notoriety, without any consideration for their lack of algorithmic novelty and innovation. An example of this controversy can be found in [14], whose authors state the problem in the very title of the work: they “provide compelling evidence that the grey wolf, the firefly, and the bat algorithms are not novel, but a reiteration of ideas introduced first for particle swarm optimization and reintroduced years later using new natural metaphors”. They then rewrite these highly cited papers in terms of PSO and conclude that “they create confusion because they hide their strong similarities with existing PSO algorithms … these three algorithms are unnecessary since they do not add anything new to the tools that can be used to tackle optimization problems”.
In [24], the authors claim that the grey wolf, moth-flame, whale, firefly, bat, and antlion algorithms are not novel, and that their inspiration has been in the literature for years. To support this claim, the authors present a rigorous, component-based analysis of each algorithm that reveals evidence that these algorithms are variants of PSO and evolutionary strategies.
We further elaborate on the above statement: our literature analysis revealed that the majority of proposals (more than half, 60%) generate new solutions based on differential vector forces over existing ones, as in the classical PSO or DE. A complementary analysis departs from this observation towards discriminating which of the classical algorithms (PSO, DE, GA, ACO, ABC or SA) is most similar to modern approaches. The results of this analysis are conclusive: 23% of all reviewed algorithms (122 out of 518) were found to be so influenced by classical algorithms that, without their biological inspiration, they could be regarded as incremental variants. The other 396 solvers (about 77%) have enough differences to be considered new proposals by themselves, rather than another version of existing classical algorithms. Still, we must emphasize that a significant percentage of these new algorithms lack originality or justification: they are not compared with the state of the art and show no real interest in achieving reasonable levels of quality on well-known problems from recent competitions.
A critical point of reflection associated with this explosion of proposals is that novel metaphors do not necessarily lead to new solvers, and that comparisons suffer from serious methodological problems. Although there are increasingly more bio-inspired algorithms, many of them rely on so-called novel metaphors that do not yield any innovative bio-inspired solvers. In addition, comparisons have often been inadequate, leading to problems of reproducibility and applicability. This problem has captured the interest of other researchers, leading to several papers on various aspects of poor comparisons and the increasing number of unoriginal proposals, even to the point of rejecting genuinely new proposals of high quality. Good methodological practices must be followed in forthcoming studies when designing, describing, and comparing new algorithms.
Exploiting the high-level information can also be regarded as a promotion of the GAE with a shallow architecture.
It should be emphasized that a large $k_0$ frequently leads to capturing the wrong information.
Classical clustering models work poorly on large-scale datasets, while DEC and SpectralNet work better on them. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph-type datasets, they fail on the general datasets, probably because the graph is constructed by an algorithm rather than given as prior information. If the graph is not updated, the information it contains remains low-level; adaptive learning induces the model to exploit high-level information. In particular, AdaGAE is stable on all datasets.
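A minimal sketch of the adaptive-graph idea discussed above (plain Python; hypothetical helper, not the authors' implementation): the k-NN graph is first built from raw features and later rebuilt from learned embeddings, so that its edges reflect higher-level structure.

```python
import math

def knn_graph(points, k):
    """Adjacency lists of a k-nearest-neighbor graph (Euclidean)."""
    n = len(points)
    graph = {}
    for i in range(n):
        dists = sorted(
            (math.dist(points[i], points[j]), j) for j in range(n) if j != i
        )
        graph[i] = [j for _, j in dists[:k]]
    return graph

# Initial graph from raw features ...
raw = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
g0 = knn_graph(raw, k=1)
# ... later rebuilt from (hypothetical) learned embeddings, so that edges
# now encode the higher-level relationships captured by the encoder.
embeddings = [(0.0,), (0.05,), (1.0,), (1.02,)]
g1 = knn_graph(embeddings, k=1)
```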
(2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. We analyze the degeneration theoretically and experimentally to understand the phenomenon. We further propose a simple but effective strategy to avoid it.
2) It helps to correct the wrong links among samples that are caused by the low-level relationships.
Domain-scan and IPv4-scan both show that the number of spoofable ASes grows with the overall number of the ASes in the Internet, see Figure 1. Furthermore, there is a correlation between fraction of scanned domains and ASes. Essentially the more domains are scanned, the more ASes are covered, and more spoofable ASes are discovered; see Figure 7. This result is of independent interest as it implies that one can avoid scanning the IPv4 and instead opt for domains-scan, obtaining a good enough approximation. This not only reduces the volume of traffic needed to carry out studies but also makes the study much more efficient.
Further, to avoid a single point of failure, it is recommended that the name servers of a domain be hosted in multiple networks. This matches our observation when correlating domains with ASes: when testing one domain per server, we can obtain different results depending on the AS on which the server is hosted.
There is a strong correlation between the AS size and the enforcement of spoofing, see Figure 13. Essentially, the larger the AS, the higher the probability that our tools identify that it does not filter spoofed packets. The reason can be directly related to our methodologies and the design of our study: the larger the network the more services it hosts. This means that we have more possibilities to test if spoofing is possible: for instance, we can identify a higher fraction of servers with a globally incremental IPID counters, which are not “load balanced”. In Figure 14 we plot the statistics of the tested networks according to their size and type. The results show a correlation between the size of the network and its type. For instance, most NSP networks are large, with CIDR/6. This is aligned with our finding that among NSP networks there was the highest number of spoofable networks.
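One of the building blocks mentioned above, identifying servers with a globally incremental IPID counter, can be sketched as a toy heuristic (plain Python; illustrative only, not the paper's actual toolchain): successive IPID values from interleaved probes grow by small positive steps, modulo the 16-bit wrap.

```python
def looks_globally_incremental(ipids, max_step=10):
    """Heuristic: True if successive IPID values always grow by a small
    positive step (mod 2**16), as a single global counter would."""
    steps = [(b - a) % 65536 for a, b in zip(ipids, ipids[1:])]
    return all(0 < s <= max_step for s in steps)

print(looks_globally_incremental([100, 101, 103, 104, 108]))   # → True
print(looks_globally_incremental([65530, 65534, 2, 5]))        # → True (wraps)
print(looks_globally_incremental([100, 4021, 377, 9000]))      # → False
```

A per-flow or randomized IPID assignment (or load balancing across hosts) breaks this pattern, which is why such servers must be excluded from the measurement.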
Figure 8. Fraction of domains hosted in multiple ASes. We check how many ASes host services of one domain: 70% of the domains are hosted in one or two ASes.
This paper also presents the NN ensemble created in the same way as with SVMs. In the NN ensemble, $T-1$ skill networks are trained using one batch each for training. Each model is assigned a weight $\beta_i$ equal to its accuracy on batch $T-1$. The weighted sum of the model class scores is the ensemble class prediction. The model is then tasked to classify samples from batch $T$.
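The weighted-sum combination described above can be sketched as follows (plain Python; toy models with fixed class scores stand in for trained networks, and the weights are hypothetical accuracies):

```python
def ensemble_predict(models, weights, x):
    """Weighted sum of per-model class scores; argmax is the prediction."""
    num_classes = len(models[0](x))
    scores = [0.0] * num_classes
    for model, beta in zip(models, weights):
        for c, s in enumerate(model(x)):
            scores[c] += beta * s
    return max(range(num_classes), key=scores.__getitem__)

# Two toy "skill models" emitting fixed class scores, weighted by their
# (hypothetical) accuracy on the held-out batch T-1.
m1 = lambda x: [0.9, 0.1]     # confident in class 0, accuracy 0.4
m2 = lambda x: [0.2, 0.8]     # confident in class 1, accuracy 0.9
print(ensemble_predict([m1, m2], [0.4, 0.9], None))  # → 1
```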
Figure 2: Neural network architectures. (A.) The batches used for training and testing illustrate the training procedure. The first $T-1$ batches are used for training, while the next unseen batch $T$ is used for evaluation. When training the context network, subsequences of the training data are selected to be processed recurrently, indicated by the labels $s$ through $p$. In all cases, training data is obtained only from the first $T-1$ batches of data. (B.) A feature vector is input to a collection of SVMs, one trained on each prior batch. Each SVM output is weighted by its corresponding coefficient, $\beta$, and the weighted sum of the output class predictions is taken to be the output, $\hat{\mathbf{y}}$, of the ensemble. (C.) A schematic of the skill model shows feedforward progression of input through two hidden layers $\mathbf{s}$ and $\mathbf{d}$ followed by the output layer $\hat{\mathbf{y}}$. (D.) A schematic of the context+skill model introduces a sequential processing of prior samples as a separate processing pathway. For each context batch from $s$ through $p-1$, one sample per odor class is chosen as a representative. The context information is then utilized by the “decision-making” layer $\mathbf{d}$ and is thus integrated into the feedforward pathway.
This paper builds upon previous work with this dataset [7], which used support vector machine (SVM) ensembles. First, their approach is extended to a modern version of feedforward artificial neural networks (NNs) [8]. Context-based learning is then introduced to utilize sequential structure across batches of data. The context model has two parts: (1) a recurrent context layer, which encodes classification-relevant properties of previously seen data, and (2) a feedforward layer, which integrates the context with the current odor stimulus to generate an odor-class prediction. The results indicate improvement from two sources: The use of neural networks in place of SVMs, and the use of context, particularly in cases where a substantial number of context sequences are available for training. Thus, emulation of adaptation in natural systems leads to an approach that can make a difference in real-world applications.
The context+skill NN model builds on the skill NN model by adding a recurrent processing pathway (Fig. 2D). Before classifying an unlabeled sample, the recurrent pathway processes a sequence of labeled samples from the preceding batches to generate a context representation, which is fed into the skill processing layer. The recurrent layers are modified via backpropagation through time, and, in this manner, the recurrent pathway learns to generate representations that support classification. The context system thus transforms samples of recently seen odors into a representation that helps classification on the next time period. This approach is similar to the context+skill technique for opponent modeling and enhanced extrapolation in games [26, 27]; the main difference is that in prior work the approach was based on neuroevolution of agent behavior, whereas in this paper it is implemented via backpropagation to generalize classification performance.
The context processing pathway utilizes the sequential structure of the dataset via recurrent processing. This pathway is incorporated with a feedforward component to define the context+skill model as described above.
Note that in the final iteration, when $i=t+1$, we take $B=\emptyset$. Now
12:         if $M'$ and $M$ are compatible then
M𝑀Mitalic_M and M′superscript𝑀′M^{\prime}italic_M start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT are compatible if and only if the union of the corresponding path covers
6:                  if $M'$ and $M$ are compatible then
6:              if $M'$ and $M$ are compatible then
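The compatibility test invoked in the pseudocode above can be sketched as follows (plain Python; assumed representation: a matching is a set of undirected edges, and two matchings are compatible when the union of the corresponding edge sets is still a disjoint union of paths, i.e., maximum degree 2 and no cycles):

```python
def compatible(m1, m2):
    """True if the union of two edge sets is a disjoint union of paths."""
    edges = {frozenset(e) for e in m1} | {frozenset(e) for e in m2}
    deg = {}
    for e in edges:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
            if deg[v] > 2:          # a vertex of degree 3+ cannot lie on a path
                return False
    # With max degree 2, it remains to rule out cycles; union-find detects
    # an edge whose endpoints are already connected.
    parent = {v: v for v in deg}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in edges:
        a, b = tuple(e)
        ra, rb = find(a), find(b)
        if ra == rb:
            return False            # edge closes a cycle
        parent[ra] = rb
    return True

print(compatible({(1, 2)}, {(2, 3)}))                   # → True: path 1-2-3
print(compatible({(1, 2), (3, 4)}, {(2, 3), (4, 1)}))   # → False: 4-cycle
```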
With this terminology, all states in $Q$ ignore open gates, closed gates, and unmarked and circled letters, so the inductive hypothesis holds trivially for these (in particular, $c\cdot w$ and $c\cdot\tilde{w}$ are always defined in these cases).
Thus, let $w=_{\mathcal{A}}\tilde{w}$ for $w,\tilde{w}\in P^{+}$. We only show the implication
We obtain the cross diagram depicted in Figure 10 and an analogous diagram for $\tilde{w}$ (compare to the action of the adding machine in 2). Thus, we have
For the remaining types of symbols in $C$ we have the following cross-diagrams, and analogous ones for $\tilde{w}$:
for $1\leq i<k$. Therefore, we can still apply the claim (1) and obtain an analogous cross diagram for $\tilde{w}$ if $\alpha\cdot\tilde{u}_{k}$ and $\beta\cdot\tilde{v}_{k}$ are both defined (in $\mathcal{A}$ and $\mathcal{B}$, respectively). Thus, in this case, $\gamma\cdot\tilde{w}=\$^{k-1}(\alpha\cdot\tilde{u}_{k})^{\tilde{D}_{\alpha}}(\beta\cdot\tilde{v}_{k})^{\tilde{D}_{\beta}}$
This work was supported in part by AFOSR grant [FA9550-18-1-0121], NSF award #1909696, and a gift from Adobe Research. We thank NVIDIA for the GPU donation. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any sponsor. We are grateful to Tyler Hayes for agreeing to review the paper at short notice and suggesting valuable edits and corrections for the paper.
Following Selvaraju et al. (2019), we train HINT on the subset with human-based attention maps Das et al. (2017), which are available for 9% of the VQA-CPv2 train and test sets. The same subset is used for VQAv2 too. The learning rate is set to $2\times10^{-5}$ and the weight for the HINT loss is set to $2$.
We compare four different variants of HINT and SCR to study the causes behind the improvements including the models that are fine-tuned on: 1) relevant regions (state-of-the-art methods) 2) irrelevant regions 3) fixed random regions and 4) variable random regions. For all variants, we fine-tune a pre-trained UpDn, which was trained on either VQA-CPv2 or VQAv2 for 40 epochs with a learning rate of $10^{-3}$. When fine-tuning with HINT, SCR or our method, we also use the main binary cross entropy VQA loss, whose weight is set to $1$. The batch size is set to $384$ for all of the experiments.
Our regularization method, which is a binary cross entropy loss between the model predictions and a zero vector, does not use additional cues or sensitivities and yet achieves near state-of-the-art performance on VQA-CPv2. We set the learning rate to $\frac{2\times10^{-6}}{r}$, where $r$ is the ratio of the training instances used for fine-tuning. The weight for the loss is set to $2$. We report the performance obtained at the $8^{th}$ epoch.
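The stated regularizer, a binary cross entropy between the predicted answer scores and an all-zero target, can be sketched as (plain Python; a sketch of the stated loss, not the authors' exact implementation):

```python
import math

def zero_target_bce(preds, eps=1e-12):
    """BCE between sigmoid scores and an all-zero target:
    mean of -log(1 - p) over the answer scores, pushing them all down."""
    return sum(-math.log(1.0 - p + eps) for p in preds) / len(preds)

# Higher scores incur a larger penalty, so the loss discourages the model
# from answering confidently from priors alone.
print(zero_target_bce([0.5, 0.1]))
```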
We compare the baseline UpDn model with HINT and SCR variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across $5$ runs, where a pre-trained UpDn model is fine-tuned on subsets with human attention maps and textual explanations for HINT and SCR, respectively. Further training details are provided in the Appendix.
A privacy policy is a legal document that an organisation uses to disclose how they collect, analyze, share, and protect users’ personal information. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users, and laws such as General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) place specific expectations upon privacy policies. However, although many internet users have concerns about their privacy Madden (2017), most fail to understand privacy policies Meiselwitz (2013). Studies show that privacy policies require a considerable investment in time to read Obar and Oeldorf-Hirsch (2018) and estimate that it would require approximately 200 hours to read all the privacy policies that an average person would come across every year McDonald and Cranor (2008).
Other corpora similar to OPP-115 Corpus have enabled research on privacy practices. The PrivacyQA corpus contains 1,750 questions and expert-annotated answers for the privacy question answering task (Ravichander et al., 2019). Similarly, Lebanoff and Liu (2018) constructed the first corpus of human-annotated vague words and sentences in privacy policies and studied automatic vagueness detection. Sathyendra et al. (2017) presented a dataset and developed a model to automatically identify and label opt-out choices offered in privacy policies. Similarly, Zimmeck et al. (2019) released a set of over 400k URLs to Android app privacy policy pages collected by crawling the Google Play store. Amos et al. (2020) collected privacy policies from around 130,000 websites from over two decades and analysed the evolution of the online privacy landscape. Finally, Nokhbeh Zaeem and Barber (2021) collected a corpus of around 100k privacy policies using the domains from DMOZ, a website which maintained categories of websites on the internet.
Prior collections of privacy policy corpora have led to progress in privacy research. Wilson et al. (2016) released the OPP-115 Corpus, a dataset of 115 privacy policies with manual annotations of 23k fine-grained data practices, and they created a baseline for classifying privacy policy text into one of ten categories. The corpus was used to train models to extract opt-out choices from privacy policies (Sathyendra et al., 2016), to automatically identify policies on websites and find compliance issues (Story et al., 2019), and to classify privacy practices and answer privacy related non-factoid questions (Harkous et al., 2018).
For the question answering task, we leveraged the PrivacyQA corpus (Ravichander et al., 2019). PrivacyQA consists of 1,750 questions about the contents of privacy policies from 35 privacy documents. While crowdworkers were asked to come up with privacy related questions based on public information about an application from the Google Play Store, legal experts were recruited to identify relevant evidence within respective privacy policies that answered the question asked by the crowdworkers. The goal of the question answering task is to identify a set of sentences in the privacy policy that has information relevant to the question. Ravichander et al. (2019) divided the corpus into 1,350 questions for training and validation and 400 questions for testing where each question in the test set is annotated by at least three experts. We fine-tuned PrivBERT on the training set as a binary classification task on each question-answer sentence pair to identify if the sentence is evidence for the question or not. We trained the model with a dropout of 0.2 and a learning rate of 3e-6 with the positive and negative classes weighted in the ratio 8:1 during training. We used sentence level F1 as the evaluation metric as described by Ravichander et al. (2019), where precision and recall are calculated by measuring the overlap between the predicted sentences and gold standard sentences.
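The sentence-level F1 used for evaluation can be sketched as follows (plain Python; simplified to a single gold annotation, whereas the actual metric aggregates over multiple experts):

```python
def sentence_f1(predicted, gold):
    """F1 of the overlap between predicted and gold evidence sentence ids."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0                    # both agree the answer is unanswerable
    if not predicted or not gold:
        return 0.0
    overlap = len(predicted & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(predicted)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(sentence_f1({1, 2, 3}, {2, 3, 4}))  # → 0.666...
```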
Natural language processing (NLP) provides an opportunity to automate the extraction of salient details from privacy policies, thereby reducing human effort and enabling the creation of tools for internet users to understand and control their online privacy. Existing research has achieved some success using expert annotated corpora of a few hundred or a few thousand privacy policies Wilson et al. (2016); Zimmeck et al. (2019); Ramanath et al. (2014), but issues of accuracy, scalability and generalization remain. More importantly, annotations in the privacy policy domain are expensive. Privacy policies are difficult to understand and many tasks such as privacy practice classification (Wilson et al., 2016), privacy question answering (Ravichander et al., 2019), vague sentence detection (Lebanoff and Liu, 2018), and detection of compliance issues (Zimmeck et al., 2019) require skilled legal experts to annotate the dataset. In contrast, approaches involving large amounts of unlabeled privacy policies remain relatively unexplored.
We have then several options to manipulate this point as shown in Figure 3(c.3): we can remove the point’s instance entirely from the data set or merge a set of points into a new one, which receives either their mean or median values per feature.
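The merge operation can be sketched as (plain Python; the list-of-feature-values representation of instances is an assumption):

```python
import statistics

def merge_points(points, how="mean"):
    """Replace a set of instances by one point with per-feature mean/median."""
    agg = statistics.mean if how == "mean" else statistics.median
    return [agg(feature) for feature in zip(*points)]

pts = [[1.0, 10.0], [2.0, 20.0], [6.0, 90.0]]
print(merge_points(pts, "mean"))    # → [3.0, 40.0]
print(merge_points(pts, "median"))  # → [2.0, 20.0]
```

The median variant is less sensitive to the outlier instance, which is why both aggregations are offered.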
and (v) we track the history of the previously stored stacking ensembles in StackGenVis (b) and compare their performances against the active stacking ensemble—the one not yet stored in the history—in StackGenVis (c).
Figure 6: The process of exploration of distinct algorithms in hypotheticality stance analysis. (a) presents the selection of appropriate validation metrics for the specification of the data set. (b) aggregates the information after the exploration of different models and shows the active ones which will be used for the stack in the next step. (c) presents the per-class performance of all the models vs. the active ones per algorithm.
The history manager saves the aforementioned manipulations or restores the previous saved step on demand.
Analysts might also want to step back to a specific previous stage in case they reached a dead end in the exploration of algorithms and models (G2).
$(v',[323])$ is adjacent to $(v',f')$, to
We have that $(v,[010])$, $(v,[323])$, and $(v,[313])$
$\overline{3}$, and to $(v,[323])$, and so
$p(v',[323])$ is $2$.
In Experiment II: Dialogue Generation, we use Persona [Zhang et al., 2018] and Weibo, regarding building a dialogue model for a user as a task. Persona is a personalized dialogue dataset with 1137/99/100 users for meta-training/meta-validation/meta-testing. Each user has 121 utterances on average. Weibo is a personalized dialogue dataset collected from Weibo conversations with 371/40/38 users for meta-training/meta-validation/meta-testing. Each user has 1200 utterances on average.
In the text classification experiments, we use the CNN proposed in [Bao et al., 2020] as the base model and follow its hyperparameter settings.
In the text classification experiments, we use accuracy (Acc) to evaluate the classification performance.
In Persona, we use pre-trained GloVe embeddings [Pennington et al., 2014]. In Weibo, we use Gensim [Rehurek and Sojka, 2010]. We follow the other hyperparameter settings in [Madotto et al., 2019].
Some works use MAML for few-shot text classification, such as relation classification [Obamuyide and Vlachos, 2019] and topic classification [Bao et al., 2020].
$\ldots,\ e^{j\frac{2\pi}{\lambda_{\mathrm{c}}}\left(\frac{(M-1)d_{\mathrm{cyl}}}{2}\cos\beta+R_{\mathrm{cyl}}\sin\frac{(N-1)\Delta\phi_{\mathrm{c}}}{2}\sin\alpha\sin\beta+R_{\mathrm{cyl}}\cos\frac{(N-1)\Delta\phi_{\mathrm{c}}}{2}\cos\alpha\sin\beta\right)}\Bigr]^{T},$
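The element phases of the steering vector above can be sketched numerically (plain Python; the centered indexing convention and the function name are assumptions, chosen so that the last element reproduces the quoted phase pattern):

```python
import cmath, math

def cca_steering_vector(M, N, d, R, dphi, lam, alpha, beta):
    """Steering vector of an M x N cylindrical conformal array (sketch).

    Assumed geometry: axial positions (m-(M-1)/2)*d and angular positions
    (n-(N-1)/2)*dphi; (alpha, beta) are the azimuth and elevation angles.
    """
    v = []
    for m in range(M):
        z = (m - (M - 1) / 2) * d                # axial coordinate
        for n in range(N):
            phi = (n - (N - 1) / 2) * dphi       # angular coordinate
            phase = (2 * math.pi / lam) * (
                z * math.cos(beta)
                + R * math.sin(phi) * math.sin(alpha) * math.sin(beta)
                + R * math.cos(phi) * math.cos(alpha) * math.sin(beta)
            )
            v.append(cmath.exp(1j * phase))
    return v
```

All entries are unit-modulus phase factors, as required of an analog AWV.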
Based on the designed CCA codebook, the joint subarray partition and AWV selection (SPAS) algorithm is developed in this section to solve the beam tracking problem in (13).
Tracking the AOAs and AODs is essential for beam tracking, which is closely connected with the position and attitude of the t-UAVs and r-UAV. The position and attitude compose the UAV’s motion state information (MSI). In this section, the MSI prediction based AOAs and AODs estimation scheme and the protocol for beam tracking are introduced in Section IV-A. Then the TE estimation algorithm which exploits the MSI prediction error is proposed in Section IV-B. The TE-aware CCA codebook based 3D beamwidth selection algorithm is developed based on the TE estimation to achieve effective beam tracking in Section IV-C.
The rest of this paper is organized as follows. In Section II, the system model is introduced. In Section III, the CCA codebook design and the codebook-based joint subarray partition and AWV selection algorithms are proposed. Next, the TE-aware codebook-based beam tracking with 3D beamwidth control is proposed in Section IV. Simulation results are given in Section V, and finally Section VI concludes this paper.
The CCA codebook based SPAS algorithm is proposed in the previous section to solve the joint CCA subarray partition and AWV selection problem. In this section, the TE-aware beam tracking problem is addressed based on the CCA codebook based SPAS algorithm.
Presburger formulas that capture all possible sizes of complete simple $A|B$-biregular graphs,
on the matrices that specify the graph constraints. The restriction is that they are “simple matrices”.
In this section we will show how to reduce the non-simple matrices to simple matrices for biregular graphs.
For a pair of simple matrices $A|B$ (with the same number of rows),
where the matrices $A$ and $B$ may have multiple colors, but are what we call simple matrices,
$Q^{\ddagger}(x)=\int\sigma(x;\theta)\,\mathrm{d}\underline{\nu}(\theta)$. We assume that $D_{\chi^{2}}(\underline{\nu}\,\|\,\nu_{0})<\infty$ and $\underline{\nu}(\theta)>0$ for any $\theta\in\mathbb{R}^{D}$.
Under Assumptions 4.1, 4.2, and 6.1, it holds for $\eta=\alpha^{-2}$ that
Upon telescoping (5.5) and setting $\eta=\alpha^{-2}$, we obtain that
Under Assumptions 4.1 and 4.2, it holds for any $k\leq T/\epsilon$ ($k\in\mathbb{N}$) that
Under Assumptions 4.1, 4.2, and 6.3, it holds for $\eta=\alpha^{-2}$ that
Our approach with the Transformer base setting brings about more improvements on the English-German task than on the English-French task. We conjecture that this may be because performance on the English-French task, which uses a large dataset ($\sim$36M sentence pairs), relies more on the capacity of the model (i.e., the number of parameters) than on the complexity of the modeling function (i.e., depth of the model, non-linearity strength per layer, etc.). With the Transformer Big model, which contains more parameters than the Transformer Base, the improvement on En-Fr ($+1.19$) is larger than that on En-De ($+0.75$, with $\sim$4.5M sentence pairs).
Considering that the layer stacks of the 6-layer Transformer are not that deep and vanilla RNNs can play a similar role as LSTMs, is it possible to train the model with a depth-wise RNN rather than the depth-wise LSTM? We first study using different approaches (Transformer, the depth-wise RNN and the depth-wise LSTM) for the 6-layer Transformer, and results are shown in Table 2.
The encoder layer with the depth-wise LSTM unit, as shown in Figure 2, first performs the self-attention computation, then the depth-wise LSTM unit takes the self-attention results and the output and the cell state of the previous layer to compute the output and the cell state of the current layer.
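The layer computation described above can be sketched with a plain LSTM cell applied across layers rather than time steps. This is a minimal NumPy sketch, not the paper's exact parameterization: the function name `depthwise_lstm_step` and the weight shapes are assumptions for illustration. The self-attention output of the current layer plays the role of the LSTM input, while the hidden and cell states come from the previous layer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def depthwise_lstm_step(attn_out, h_prev, c_prev, W, U, b):
    """One depth-wise LSTM step across *layers* (not time).

    attn_out: self-attention output of the current layer, shape (batch, d).
    h_prev, c_prev: output and cell state of the previous layer, shape (batch, d).
    W: (4d, d) input weights, U: (4d, d) recurrent weights, b: (4d,) bias.
    """
    z = attn_out @ W.T + h_prev @ U.T + b
    i, f, g, o = np.split(z, 4, axis=-1)           # input, forget, cell, output gates
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # new cell state of this layer
    h = sigmoid(o) * np.tanh(c)                        # output of this layer
    return h, c
```

Stacking such steps replaces the residual connections of the vanilla Transformer with the LSTM's gated pathway through depth.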
When using the depth-wise RNN, the architecture is quite similar to the standard Transformer layer without residual connections, except that the concatenation of the input to the encoder/decoder layer with the output(s) of the attention layer(s) serves as the input to the last FFN sub-layer. Table 2 shows that the 6-layer Transformer with the depth-wise RNN is able to converge, but it performs much worse than the model with the depth-wise LSTM (and also much worse than the vanilla Transformer), while the depth-wise LSTM outperforms the vanilla Transformer, suggesting the importance of the gating mechanisms of the depth-wise LSTM. The decoding speed of our baseline vanilla Transformer implementation (750.58 sentences/s) is quite fast, 1.12 times as fast as the depth-wise LSTM approach, but our approach leads to a higher BLEU score than the baseline, and as shown in Table 6, our approach indeed requires fewer parameters and decodes faster than the vanilla Transformer for a comparable BLEU score.
We show that the 6-layer Transformer using depth-wise LSTM can bring significant improvements in both WMT tasks and the challenging OPUS-100 multilingual NMT task. We show that depth-wise LSTM also has the ability to support deep Transformers with up to 24 layers, and that the 12-layer Transformer using depth-wise LSTM already performs at the level of the 24-layer vanilla Transformer.
$\mathsf{FO}$-interpretation that is surjective and continuous from $X$ to $Y$,
$f\colon\langle X,\uptau,\mathcal{L}\rangle\to\langle Y,\uptheta,\mathcal{L}^{\prime}\rangle$ is
1.2.2]. A map $f\colon(X,\uptau)\to(Y,\uptheta)$
Recall that $(Y,\uptheta)$ is a pre-spectral subspace of $(X,\uptau)$
whenever $(Y,\uptheta)$ is a pre-spectral space such that $Y\subseteq X$,
To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images based on the PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and the proposed MDLD (mean distortion level deviation). All the comparison methods are used to conduct distortion rectification on the test dataset of 2,000 distorted images. For the PSNR and SSIM, we compute these two metrics using the pixel difference between each rectified image and the ground truth image. For the MDLD, we first exploit the estimated distortion parameters to obtain all distortion levels of the test distorted image based on Eq. 5. Then, the value of MDLD can be calculated from the difference between the estimated distortion levels and the ground truth distortion levels based on Eq. 21. Note that generation-based methods such as Li [11] and Liao [12] directly learn the transformation manner of the pixel mapping instead of estimating the distortion parameters, so we only evaluate these two methods in terms of the PSNR and SSIM.
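As a rough illustration of these metrics, here is a minimal sketch. `psnr` is the standard definition; `mdld` implements a simplified reading of the metric (mean absolute deviation between estimated and ground-truth distortion levels), since the exact normalization of Eq. 21 is not reproduced here.

```python
import numpy as np

def psnr(img, ref, peak=255.0):
    """Peak signal-to-noise ratio between a rectified image and ground truth."""
    mse = np.mean((np.asarray(img, dtype=np.float64)
                   - np.asarray(ref, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def mdld(levels_est, levels_gt):
    """Mean distortion level deviation: mean absolute difference between the
    estimated and ground-truth distortion levels (simplified reading of Eq. 21)."""
    levels_est = np.asarray(levels_est, dtype=np.float64)
    levels_gt = np.asarray(levels_gt, dtype=np.float64)
    return float(np.mean(np.abs(levels_est - levels_gt)))
```

Unlike PSNR/SSIM, which compare pixels of the rectified image, MDLD compares the distortion levels derived from the estimated parameters, so it is only defined for parameter-estimating methods.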
As listed in Table II, our approach significantly outperforms the compared approaches in all metrics, achieving the highest PSNR and SSIM as well as the lowest MDLD. Specifically, compared with the traditional methods [23, 24] based on hand-crafted features, our approach overcomes the scene limitation and the simple camera model assumption, showing more promising generality and flexibility. Compared with the learning-based distortion rectification methods [8][11][12], which omit the prior knowledge of the distortion, our approach transfers the heterogeneous estimation problem into a homogeneous one, expressing the implicit relationship between image features and predicted values in a more explicit form. Benefiting from the effective ordinal supervision and the guidance of distortion information during the learning process, our approach outperforms Liao [12] by a significant margin, with approximately 23% improvement on PSNR and 17% improvement on SSIM. Besides the high quality of the rectified image, our approach can obtain the accurate distortion parameters of a distorted image, which is crucial for subsequent tasks such as camera calibration. However, the generation-based methods [11][12] mainly focus on the pixel reconstruction of a rectified image and ignore the parameter estimation.
In contrast to the long history of traditional distortion rectification, learning methods began to study distortion rectification only in the last few years. Rong et al. [8] quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model [22] and then trained a network to classify the distorted image. This method achieved deep distortion rectification for the first time, while the coarse values of parameters and the simplified camera model severely influenced its generalization ability. To expand the application, Yin et al. [9] rectified the distortion in terms of the fisheye camera model using a multi-context collaborative deep network. However, their correction results heavily rely on the semantic segmentation results, leading to a strong cascading effect. Xue et al. [10] improved the performance of distortion parameter estimation by distorted lines. In analogy to traditional methods [21, 23, 24], the extra hand-crafted features introduced limit the robustness of this algorithm and decrease the efficiency of the rectification. Note that the above methods directly estimate distortion parameters from a single distorted image; such an implicit and heterogeneous calibration objective hinders sufficient learning of the distortion information. To solve the imbalance problem in the distortion parameter estimation, recent works [11, 12, 13] optimized the image reconstruction loss rather than the parameter regression loss for rectification. However, their models are based on the parameter-free mechanism and cannot estimate the distortion parameters, which are important for structure from motion and camera calibration. Manuel et al. [14] proposed a parameterization scheme for the extrinsic and intrinsic camera parameters, but they only considered one distortion coefficient for rectification and cannot apply the algorithm to more complicated camera models.
In this part, we compare our approach with the state-of-the-art methods in both quantitative and qualitative evaluations, in which the compared methods can be classified into traditional methods [23][24] and learning methods [8][11][12]. Note that our approach only requires a patch of the input distorted image to estimate the ordinal distortion.
All experiments are performed using the PyTorch platform on a server with eight NVIDIA Tesla V100 GPU cards.
We consider three common deep learning tasks: image classification, natural language processing (NLP), and click-through rate (CTR) prediction for large-batch training evaluation.
Table 6 shows the test perplexity of the three methods with different batch sizes. We can observe that for small batch size, SNGM achieves test perplexity comparable to that of MSGD, and for large batch size, SNGM is better than MSGD. Similar to the results of image classification, SNGM outperforms LARS for different batch sizes.
Table 7 shows the training time per epoch of SNGM with different batch sizes. We can observe that larger batch sizes can reduce the training time, which is similar to the results of image classification tasks.
We further conduct CTR prediction experiments to evaluate SNGM. We train DeepFM [8] on a CTR prediction dataset containing ten million samples that are sampled from the Criteo dataset 777https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/.
$11$-approximation for inhomogeneous 2S-MatSup-Poly, with $|\mathcal{S}|\leq 2^{m}$.
$3$-approximation for homogeneous 2S-Sup-Poly with $|\mathcal{S}|\leq(n+1)!$.
Clustering is a fundamental task in unsupervised and self-supervised learning. The stochastic setting models situations in which decisions must be made in the presence of uncertainty and is of particular interest in learning and data science. The black-box model is motivated by data-driven applications where specific knowledge of the distribution is unknown but we have the ability to sample or simulate from the distribution. To our knowledge, radius minimization has not been previously considered in the two-stage stochastic paradigm. Most prior work in this setting has focused on Facility Location [23, 24, 21, 22, 11, 19, 25]. On similar lines, [1] studies a stochastic $k$-center variant, where points arrive independently and each point only needs to get covered with some given probability. 2S-Sup is the natural two-stage counterpart of the well-known Knapsack-Supplier problem, which has a well-known $3$-approximation [14].
The $3$-approximation for 2S-Sup-Poly is presented in Section 3, based on a novel LP rounding technique; notably, its approximation ratio matches the lower bound of the non-stochastic counterpart (Knapsack Supplier).
Here (1) captures the budget constraint, and (2) captures the radius covering constraint. If the given 2S-Sup-Poly instance is feasible, we can solve the LP. The rounding algorithm appears in Algorithm 3.
The graph with a generalized weighted adjacency matrix is often used to describe the competitive and cooperative interaction behaviors that arise in some application scenarios.
So, it is also worth studying the distributed stochastic optimization over the network with the generalized weighted adjacency matrix in the future.
In most of the existing works on distributed convex optimization, it is assumed that the subgradients are bounded if the local cost
II. The structure of the networks among optimizers is modeled by a more general sequence of random digraphs. The sequence of random digraphs is conditionally balanced, and the weighted adjacency matrices are not required to have special statistical properties such as independence with identical distribution, Markovian switching, or stationarity. The edge weights are also not required to be nonnegative at every time instant. By introducing the concept of conditional digraphs and developing the stochastic Lyapunov method for distributed optimization over non-stationary randomly time-varying networks, a uniformly conditionally joint connectivity condition is established to ensure the convergence of the distributed stochastic optimization algorithms.
Observing Figure 7(a), the information loss of MuCo increases as the parameter $\delta$ decreases. According to Corollary 3.2, each QI value in the released table corresponds to more records as $\delta$ is reduced, so that more records must be involved to cover the QI values of long distance. Therefore, decreasing $\delta$ enhances the protection but also increases the information loss. In addition, compared with Figure 7(b), both the information loss and the interval of MuCo are much smaller than those of Mondrian. Thus, the experiments illustrate that, compared with generalization, MuCo preserves more information utility and enhances the protection at a much smaller cost of information loss.
In this experiment, we use the approach of aggregate query answering [37] to check the information utility of MuCo. We randomly generate 1,000 queries and calculate the average relative error rate for comparison. The sequence of the query is expressed in the following form
In this work, we propose a novel technique, called Mutual Cover (MuCo), to protect privacy for microdata publication. The rationale is to make similar records cover for each other at minimal cost by perturbing the original QI values according to random output tables. In this way, MuCo can achieve strong protection, and the anonymization process is hidden from the adversary. Furthermore, MuCo preserves more information utility than generalization because the distributions of the original QI values are preserved as much as possible, and the results of query statements are specific matching tuples rather than groups. Additionally, MuCo avoids the problem of over-protection for identities. The experiments illustrate that MuCo provides impressive privacy protection, little information loss, and accurate query answering.
We observe that the results of MuCo are much better than those of Mondrian and Anatomy. The primary reason is that MuCo retains most of the distributions of the original QI values, and the results of queries are specific records rather than groups. Consequently, the query answering accuracy of MuCo is much better and more stable than that of Mondrian and Anatomy. Besides, since the results of queries for MuCo are specific records rather than groups, the relative error rate of MuCo does not increase steadily with the growth of $\delta$ but fluctuates depending on specific query conditions. Therefore, differing from Mondrian and Anatomy, increasing the level of protection of MuCo has little influence on the query results. In conclusion, MuCo can achieve the same level of protection as generalization does but with less information loss and more accurate query results. Note that, since we use the sum of salary for comparison (the range of salary is from 4 to 718,000), the relative error rates of Mondrian are much larger than those reported in some existing works.
Specifically, the query condition contains four random QI attributes, and the sum of salary is the result. We use the same parameters of MuCo and perform Mondrian and Anatomy complying with $l$-diversity for comparison. Since the generated query conditions are highly stochastic, we report the average values and the variances of the relative error rates, as given in Figure 8 and Figure 9, respectively.
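The relative-error evaluation above can be sketched as follows. The helper `avg_relative_error_rate` is hypothetical: it averages |estimated − true| / true over a batch of SUM-query answers, and it skips queries whose true answer is zero (how the paper handles those is not specified).

```python
def avg_relative_error_rate(est_answers, true_answers):
    """Average relative error rate over a batch of aggregate (SUM) queries.

    est_answers: answers computed on the anonymized/perturbed table.
    true_answers: answers computed on the original table.
    Queries with a zero true answer are skipped (an assumption made here).
    """
    errs = [abs(e - t) / t
            for e, t in zip(est_answers, true_answers) if t != 0]
    return sum(errs) / len(errs)
```

For example, estimated sums of 90 and 110 against true sums of 100 and 100 give an average relative error rate of 0.1.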
To fully understand which components contribute to PointRend’s performance, we construct our own validation set by randomly selecting 3000 images from original training data to evaluate offline. We will show the step-by-step improvements adopted on PointRend.
Table 2: PointRend’s step-by-step performance on our own validation set (split from the original training set). “MP Train” means more points training and “MP Test” means more points testing. “P6 Feature” indicates adding P6 to the default P2-P5 levels of FPN for both the coarse prediction head and the fine-grained point head. “FP16” means mixed precision training.
In the following, we refer the model in the last row (74.3 mAP) of Table 2 as PointRend baseline. The baseline trained on the official training set finally reaches 79.17 and 77.38 mAP on validation and testing set respectively, as shown in Table 1 and Table 3. It surpasses SOLOv2 by a large margin: 6.2, 4.5 and 3.5 mAP respectively for small, medium and large size on validation set. We believe that PointRend’s iteratively rendering process acts as a pivot for generating high-quality masks, especially fine-grained instance boundaries. Due to its superior performance, we only choose PointRend as ensemble candidates for the final submission.
HTC is known as a competitive method for COCO and OpenImage. By enlarging the RoI size of both box and mask branches to 12 and 32 respectively for all three stages, we gain roughly 4 mAP improvement against the default settings in original paper. Mask scoring head Huang et al. (2019) adopted on the third stage gains another 2 mAP. Armed with DCN, GC block and SyncBN training, our HTC with Res2NetR101 backbone yields 74.58 mAP on validation set, as shown in Table 1. However, the convolutional mask heads adopted in all stages bring non-negligible computation and memory costs, which constrain the mask resolution and further limit the segmentation quality for large instances.
Bells and Whistles. MaskRCNN-ResNet50 is used as baseline and it achieves 53.2 mAP. For PointRend, we follow the same setting as Kirillov et al. (2020) except for extracting both coarse and fine-grained features from the P2-P5 levels of FPN, rather than only P2 described in the paper. Surprisingly, PointRend yields 62.9 mAP and surpasses MaskRCNN by a remarkable margin of 9.7 mAP. More Points Test. By increasing the number of subdivision points from the default 28 to 70 during inference, we gain another 1.1 mAP with no extra training cost. Large Backbone. X101-64x4d Xie et al. (2017) is then used as large backbone and it boosts 6 mAP against ResNet50. DCN and More Points Train. We adopt more interpolated points during training, by increasing the number of sampled points from the original 14 to 26 for the coarse prediction head, and from 14 to 24 for the fine-grained point head. Then by adopting DCN Dai et al. (2017), we gain 71.6 mAP, which already outperforms HTC and SOLOv2 from our offline observation. Large Resolution and P6 Feature. Due to PointRend’s lightweight segmentation head and lower memory consumption compared to HTC, the input resolution can be further increased from range [800,1000] to [1200,1400] during multi-scale training. P6 level of FPN is also added for both the coarse prediction head and the fine-grained point head, which finally yields 74.3 mAP on our split validation set. Other tricks we tried on PointRend give little improvement, including MaskScoring head, GC Block and DoubleHead Wu et al. (2020).
For the significance of this conjecture we refer to the original paper [FK], and to Kalai’s blog [K] (embedded in Tao’s blog) which reports on all significant results concerning the conjecture. [KKLMS] establishes a weaker version of the conjecture. Its introduction is also a good source of information on the problem.
Here we give an embarrassingly simple presentation of an example of such a function (although it can be shown to be a version of the example in the previous version of this note). As was written in the previous version, an anonymous referee of version 1 wrote that the theorem was known to experts but not published. Maybe the presentation below is what was known.
($0\log 0:=0$). The base of the $\log$ does not really matter here. For concreteness we take the $\log$ to base $2$. Note that if $f$ has $L_{2}$ norm $1$ then the sequence $\{|\hat{f}(A)|^{2}\}_{A\subseteq[n]}$ sums up to $1$ and thus this is the usual definition of entropy of this probability distribution.
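This spectral entropy can be computed by brute force for small $n$. A minimal sketch, assuming the function values are listed over $\{-1,1\}^n$ in lexicographic order; `fourier_entropy` is an illustrative helper, not code from the note.

```python
import itertools
import numpy as np

def fourier_entropy(f_vals):
    """Entropy of the Fourier spectrum of f: {-1,1}^n -> R with ||f||_2 = 1.

    f_vals lists f on inputs ordered lexicographically over {-1,1}^n.
    Brute force: compute each Fourier coefficient f_hat(A) = E[f * chi_A]
    directly, then the entropy of {|f_hat(A)|^2} with 0 log 0 := 0, log base 2.
    """
    n = int(np.log2(len(f_vals)))
    points = list(itertools.product([-1, 1], repeat=n))
    ent = 0.0
    for A in itertools.product([0, 1], repeat=n):   # indicator of a subset of [n]
        chi = [np.prod([x[i] for i in range(n) if A[i]]) for x in points]
        fhat = np.mean(np.array(f_vals) * np.array(chi))
        p = fhat ** 2
        if p > 0:
            ent -= p * np.log2(p)
    return ent
```

A dictator function $f(x)=x_1$ has all spectral mass on one coefficient and hence entropy $0$, while $f(x)=(x_1+x_2)/\sqrt{2}$ splits the mass evenly over two coefficients and has entropy $1$.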
In version 1 of this note, which can still be found on the arXiv, we showed that the analogous version of the conjecture for complex functions on $\{-1,1\}^{n}$ which have modulus $1$ fails. This solves a question raised by Gady Kozma some time ago (see [K], comment from April 2, 2011). More specifically, we proved
We conduct numerical experiments on synthetic nonstationary linear MDPs to demonstrate the effectiveness of our proposed algorithms.
To make the environment challenging for exploration, our construction falls into the category of combination lock (Koenig & Simmons, 1993). For each of these 5 linear MDPs, there is only one good (and different) chain that contains a huge reward at the end, but 0 reward for the rest of the chain. Further, any sub-optimal action has small positive rewards that would attract the agent to depart from the optimal route. Therefore, the agent must perform “deep exploration” (Osband et al., 2019) to obtain near-optimal policy. The details of the constructions are in Appendix E. Here we report the cumulative rewards and the running time of all algorithms averaged over 10 trials.
However, all of the aforementioned empirical and theoretical works on RL with function approximation assume the environment is stationary, which is insufficient to model problems with time-varying dynamics. For example, consider online advertising. The instantaneous reward is the payoff when viewers are redirected to an advertiser, and the state is defined as the details of the advertisement and user contexts. If the target users’ preferences are time-varying, time-invariant reward and transition functions are unable to capture the dynamics. In general, nonstationary random processes naturally occur in many settings and can characterize larger classes of problems of interest (Cover & Pombra, 1989). Can one design a theoretically sound algorithm for large-scale nonstationary MDPs? In general, it is impossible to design an algorithm achieving sublinear regret for MDPs with non-oblivious adversarial reward and transition functions in the worst case (Yu et al., 2009). Then what is the maximum nonstationarity a learner can tolerate while adapting to the time-varying dynamics of an MDP with a potentially infinite number of states? This paper addresses these two questions.
We consider the setting of episodic RL with nonstationary reward and transition functions. To measure the performance of an algorithm, we use the notion of dynamic regret, the performance difference between an algorithm and the set of policies optimal for individual episodes in hindsight. For nonstationary RL, dynamic regret is a stronger and more appropriate notion of performance measure than static regret, but is also more challenging for algorithm design and analysis. To incorporate function approximation, we focus on a subclass of MDPs in which the reward and transition dynamics are linear in a known feature map (Melo & Ribeiro, 2007), termed linear MDP. For any linear MDP, the value function of any policy is linear in the known feature map since the Bellman equation is linear in the reward and transition dynamics (Jin et al., 2020). Since the optimal policy is greedy with respect to the optimal value function, linear function approximation suffices to learn the optimal policy. For nonstationary linear MDPs, we show that one can design a near-optimal, statistically efficient algorithm achieving sublinear dynamic regret as long as the total variation of the reward and transition dynamics is sublinear. Let $T$ be the total number of time steps, $B$ be the total variation of the reward and transition functions throughout the entire time horizon, $d$ be the ambient dimension of the features, and $H$ be the planning horizon.
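Under the standard episodic convention (the notation here is assumed for illustration, not taken verbatim from the paper), the dynamic regret over $K$ episodes can be written as

```latex
\mathrm{D\text{-}Regret}(K) \;=\; \sum_{k=1}^{K}\Bigl( V^{*}_{k}(s_{k,1}) \;-\; V^{\pi_{k}}_{k}(s_{k,1}) \Bigr),
```

where $V^{*}_{k}$ is the optimal value function of episode $k$, $\pi_{k}$ is the policy the algorithm executes in episode $k$, and $s_{k,1}$ is the initial state of episode $k$; the benchmark re-optimizes in every episode, which is what makes dynamic regret stronger than static regret.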
Bandit problems can be viewed as a special case of MDP problems with unit planning horizon. It is the simplest model that captures the exploration-exploitation tradeoff, a unique feature of sequential decision-making problems. There are several ways to define nonstationarity in the bandit literature. The first one is piecewise-stationary (Garivier & Moulines, 2011), which assumes the expected rewards of arms change in a piecewise manner, i.e., stay fixed for a time period and abruptly change at unknown time steps. The second one is to quantify the total variations of expected rewards of arms (Besbes et al., 2014). The general strategy to adapt to nonstationarity
While fake news is not a new phenomenon, the 2016 US presidential election brought the issue to immediate global attention with the discovery that fake news campaigns on social media had been made to influence the election (Allcott and Gentzkow, 2017). The creation and dissemination of fake news is motivated by political and financial gains, and its influence has led to increasing social costs due to the adverse effects it has on people’s truth discernment and behavior (Duffy et al., 2020). With fake news stemming mainly from digital media and causing misguided dissent that could compromise collaboration among people, we see this to be of concern to the CSCW community. As global efforts addressing fake news take off, we aim to understand what the perceptions and practices of news sharing and fake news are in a local context, with Singapore as the place of interest, to gain insights on where best to direct local mitigation efforts.
Singapore is a city-state with an open economy and diverse population that shapes it to be an attractive and vulnerable target for fake news campaigns (Lim, 2019). As a measure against fake news, the Protection from Online Falsehoods and Manipulation Act (POFMA) was passed on May 8, 2019, to empower the Singapore Government to more directly address falsehoods that hurt the public interest. The rising attention of fake news in the local scene has motivated various research including studies on the perceptions and motivations of fake news sharing (Chen et al., 2015) and responses to fake news (Edson C Tandoc et al., 2020). Although there are parallels between these studies and ours, we want to highlight that our study explores fake news in general media instead of solely social media, examining both usage and trust. Furthermore, we investigate more broadly the attitudes and behaviors on news sharing and fake news.
Many studies worldwide have observed the proliferation of fake news on social media and instant messaging apps, with social media being the more commonly studied medium. In Singapore, however, mitigation efforts on fake news in instant messaging apps may be more important. Most respondents encountered fake news on instant messaging apps compared to social media, and have reported the least trust in them. They have also rated the sharing of fake news to be a greater problem than its creation. These suggest that, in Singapore, communication with personal contacts such as through the forwarding of messages, rather than with the public such as by sharing posts on social media feeds, is the larger issue. As an Asian country, Singapore tends towards a collectivist culture where emphasis is placed on establishing and maintaining relationships in one’s social group. Research has shown that this is linked to lesser use of social media (Jackson and Wang, 2013), and stronger preferences towards group chats in instant messaging apps (Li et al., 2011), signaling that instant messaging apps feature more prominently in daily communication. An opportunity here is to design more effective interventions, such as warning mechanisms (Gao et al., 2018), to preempt the private sharing of fake news.
Fake news refers to news articles that are "either wholly false or containing deliberately misleading elements incorporated within its content or context" (Bakir and McStay, 2018). The presence of fake news has become more prolific on the Internet due to the ease of production and dissemination of information online (Shu et al., 2017). The usage of fake news ranges from self-serving purposes like clickbait for moneymaking (Geçkil et al., 2018) to agendas on a national scale like political manipulation (Allcott and Gentzkow, 2017) and terrorism (Fang, 2021). With the rapid and extensive adoption of social platforms, fake news has come to be more closely integrated with daily life, resulting in rising social costs due to people making poorly justified and unwarranted choices based on inaccurate knowledge (Duffy et al., 2020). This has spurred CSCW research on areas like attitudes towards news (Wang and Mark, 2013), news transmission (Liao and Shi, 2013), and forms of innovative countermeasures (Bhuiyan et al., 2018; Mitra et al., 2017), revealing the breadth of interests in this issue.
D
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attempt to address entity alignment by introducing a new relation, the results often demonstrate poor performance, as evidenced in [2, 27].
Our method represents a standard KG embedding approach capable of generating embeddings for various tasks. This distinguishes it from most inductive methods that either cannot produce entity embeddings [22, 23, 25], or have entity embeddings conditioned on specific relations/entities [20, 21]. While some methods attempt to address entity alignment by introducing a new relation, the results often demonstrate poor performance, as evidenced in [2, 27].
We conduct experiments to explore the impact of the number of unseen entities on the performance in open-world entity alignment. We present the results on the ZH-EN datasets in Figure 6. Clearly, the performance gain achieved by leveraging our method significantly increases when there are more unseen entities. For example, when only 20% of entities are unseen, decentRL outperforms AliNet on Hits@1 by 9.2%, while this margin extends to 35.9% when 80% of entities are unseen. Overall, decentRL demonstrates significant advantages as new entities are added to KGs.
In this work, we propose Decentralized Attention Network for knowledge graph embedding and introduce self-distillation to enhance its ability to generate desired embeddings for both known and unknown entities. We provide theoretical justification for the effectiveness of our proposed learning paradigm and conduct comprehensive experiments to evaluate its performance on entity alignment and entity prediction, considering scenarios with and without new entities. Our experimental results demonstrate state-of-the-art performance of the proposed method on conventional and open-world benchmarks for both entity alignment and entity prediction tasks. Our method not only provides a solution for knowledge graph representation learning but also offers valuable insights into the potential of decentralized attention mechanisms for other graph-based applications.
Unlike many inductive methods that are solely evaluated on datasets with unseen entities, our method aims to produce high-quality embeddings for both seen and unseen entities across various downstream tasks. To our knowledge, decentRL is the first method capable of generating high-quality embeddings for different downstream tasks on datasets that encompass both existing and new entities.
D
In this section, we conduct experiments to compare the proposed VDM with several state-of-the-art model-based self-supervised exploration approaches. We first describe the experimental setup and implementation details. Then, we compare the proposed method with baselines on several challenging image-based RL tasks. The code and video are available at https://sites.google.com/view/exploration-vdm.
We evaluate the proposed method on several challenging image-based tasks from OpenAI Gym (http://gym.openai.com/) and Retro (https://retro.readthedocs.io), including
We demonstrate the setup of the experiment in Fig. 10. The equipment mainly includes an RGB-D camera that provides the image-based observations, a UR5 robot arm that interacts with the environment, and different objects in front of the robot arm. An example of the RGB-D image is shown in Fig. 11. We develop a robot environment based on OpenAI Gym to provide the interface for the RL algorithm. We connect a GPU workstation, the robot arm, and a camera through the TCP protocol. The PPO algorithm and VDM run on the GPU workstation. During training, the samples collected by the camera are sent to the GPU workstation, and the commands generated by the policy are sent to the robot arm to execute. We stack the RGB-D data and resize it to 84×84×4 pixels as the input state in RL. The arm moves according to the position control of a vertically-oriented gripper. We represent the continuous actions by a Cartesian displacement [dx, dy, dz, dω], where ω is the rotation of the wrist around the z-axis. The output of the policy is a Gaussian distribution. We do not use the gripper in our experiment and keep it open during training. Each training episode contains a maximum of 100 time steps of interaction. An episode terminates when it exceeds the maximal length of 100 time steps, or when the robot arm pushes all objects out of the workspace.
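As an illustrative sketch (not the authors' code), the observation preprocessing described above — stacking the RGB and depth channels and downsizing to 84×84×4 — could look as follows; the function name and the nearest-neighbour resize are assumptions:

```python
import numpy as np

def preprocess_rgbd(rgb, depth, size=84):
    """Stack RGB and depth into one 4-channel image and resize to size x size.

    A minimal nearest-neighbour resize; names and details are illustrative,
    not the implementation from the paper.
    """
    stacked = np.concatenate([rgb, depth[..., None]], axis=-1)  # H x W x 4
    h, w, _ = stacked.shape
    rows = np.arange(size) * h // size   # nearest-neighbour row indices
    cols = np.arange(size) * w // size   # nearest-neighbour column indices
    return stacked[rows][:, cols]        # size x size x 4

# Example with a 480x640 camera frame
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.zeros((480, 640), dtype=np.uint8)
state = preprocess_rgbd(rgb, depth)
```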
In this section, we conduct experiments to compare the proposed VDM with several state-of-the-art model-based self-supervised exploration approaches. We first describe the experimental setup and implementation details. Then, we compare the proposed method with baselines on several challenging image-based RL tasks. The code and video are available at https://sites.google.com/view/exploration-vdm.
Upon fitting VDM, we propose an intrinsic reward by an upper bound of the negative log-likelihood, and conduct self-supervised exploration based on the proposed intrinsic reward. We evaluate the proposed method on several challenging image-based tasks, including 1) Atari games, 2) Atari games with sticky actions, which adds more stochasticity in the environment, 3) Super Mario, which we utilize to evaluate the adaptability of VDM to the novel environments, 4) a Multi-player game, which has two controllable agents against each other, and 5) a real robotic manipulating task, which we utilize to evaluate our method in real application scenarios. Experiments demonstrate that VDM outperforms several state-of-the-art dynamics-based self-supervised exploration approaches.
A
To do so, we sample 100 random nodes P ⊆ Ω, |P| = 100, independently generated for each degree but identical for all methods, and determine max_{q∈P} |f(q) − Q_f(q)| ≈ ‖f − Q_f‖_{C⁰(Ω)}.
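A minimal NumPy sketch of this Monte-Carlo sup-norm error estimate; the target f (the Runge function) and the polynomial stand-in for Q_f are illustrative choices, not the interpolation schemes compared in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return 1.0 / (1.0 + 25.0 * x ** 2)    # Runge function on [-1, 1]

# A degree-10 interpolant on equispaced nodes stands in for Q_f.
nodes = np.linspace(-1.0, 1.0, 11)
coeffs = np.polyfit(nodes, f(nodes), 10)

def Qf(x):
    return np.polyval(coeffs, x)

P = rng.uniform(-1.0, 1.0, size=100)       # random node set P, |P| = 100
sup_err = np.max(np.abs(f(P) - Qf(P)))     # estimate of ||f - Q_f|| in C^0
```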
Chebfun, and MIP are the only methods that converge down to machine precision (double-precision arithmetic). The convergence rate is as stated in
However, this does not mean that efficient algorithms to evaluate the resulting interpolants to machine precision are known.
The error bound in Eq. (1.4) only guarantees a polynomial convergence rate, but no exponential convergence;
Consequently, as we demonstrate in Section 8, this allows approximating highly varying functions, such as the Runge function, to machine precision.
A
[31, 6] find the worst-case direction that maximizes the Wasserstein distance between projected sample points in one dimension.
Recently, [32, 33, 34] naturally extend this idea by projecting data points into a k-dimensional linear subspace with k > 1 such that the 2-Wasserstein distance after projection is maximized.
In contrast, the power of the PW test decreases more slowly since it operates by projecting high-dimensional data points into a low-dimensional subspace.
It is intuitive to understand the differences between two collections of high-dimensional samples by projecting those samples into low-dimensional spaces in some proper directions [29, 30, 31, 6, 32, 33, 34].
The max-sliced Wasserstein distance is proposed to address this issue by finding the worst-case one-dimensional projection mapping such that the Wasserstein distance between projected distributions is maximized.
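A rough sketch of the idea, assuming a random search over unit directions as a crude surrogate for the exact worst-case projection (the true max-sliced distance requires optimizing over all directions):

```python
import numpy as np

def w1_1d(u, v):
    """Wasserstein-1 distance between two equal-size 1-D empirical samples."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

def max_sliced_w1(X, Y, n_dirs=500, seed=0):
    """Crude max-sliced estimate: take the largest projected 1-D distance
    over random unit directions instead of solving the exact worst-case
    projection problem."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return max(w1_1d(X @ th, Y @ th) for th in dirs)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
Y = rng.normal(size=(200, 10))
Y[:, 0] += 3.0   # the two samples differ only along the first coordinate
```

A direction close to the first coordinate axis exposes the mean shift, so the max-sliced estimate is large even though most random projections show little difference.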
A
VAE-type DGMs use amortized variational inference to learn an approximate posterior q_φ(H|x) by maximizing an evidence lower bound (ELBO) to the log-marginal likelihood of the data under the model p_θ(X).
The model has two parts. First, we apply a DGM to learn only the disentangled part, C, of the latent space. We do that by applying any of the above-mentioned VAEs (in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or flow-based DGMs, supervised, semi-supervised or unsupervised; in the Appendix we present such implementations), where we significantly constrain the capacity of the learned representation and heavily regularize the model to produce independent factors. As we explained above, such a model will likely learn a good disentangled representation; however, its reconstruction will be of low quality, as it will only be able to generate the information captured by the disentangled factors while averaging the details. For example, in Figure 1, the model uses β-TCVAE [mig] to retrieve the pose of the model as a latent factor. In the reconstruction, the rest of the details are averaged, resulting in a blurry image (1b). The goal of the second part of the model is to add the details while maintaining the semantic information retrieved in the first stage. In Figure 1 that means transforming Image 1b (the output of the first stage) to be as similar as possible to Image 1a (the target observation). We can view this as a style transfer task and use a technique from [adaIN] to achieve our goal.
Prior work in unsupervised DR learning suggests the objective of learning statistically independent factors of the latent space as a means to obtain DR. The underlying assumption is that the latent variables H can be partitioned into independent components C (i.e., the disentangled factors) and correlated components Z, a.k.a. nuisance variables, which encode the detail information not stored in the independent components. A series of works starting from [beta] aims to achieve that by regularizing the models, up-weighting certain terms in the ELBO formulation which penalize the (aggregate) posterior to be factorized over all or some of the latent dimensions [kumar2017variational, factor, mig].
Amortization of the inference is achieved by parameterising the variational posterior with another deep neural network (called the encoder or the inference network) that outputs the variational posterior parameters as a function of X. Thus, after jointly training the encoder and decoder, a VAE model can perform two complementary tasks: extract a low-dimensional representation of a given observation x as well as reconstruct an observation from its low-dimensional representation.
Deep generative models (DGMs) such as variational autoencoders (VAEs) [dayan1995helmholtz, vae, rezende2014stochastic] and generative adversarial networks (GANs) [gan] have enjoyed great success at modeling high-dimensional data such as natural images. As the name suggests, DGMs leverage deep learning to model a data-generating process. These models work on the underlying assumption that the high-dimensional observations X ∈ R^D can be meaningfully described by a small set of low-dimensional latent factors H ∈ R^K, where K < D. More precisely, the observation (X = x) is assumed to be generated by first sampling a set of low-dimensional factors h from a simple prior distribution p(H) and then sampling x ∼ p_θ(X|h). DGMs realize p_θ through a deep neural network also known as the decoder or the generative network.
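A toy instance of this generative assumption, with a random linear-plus-tanh map standing in for the trained decoder network (all names and sizes here are illustrative):

```python
import numpy as np

# Generative story: h in R^K is drawn from the simple prior p(H) = N(0, I_K),
# and the observation x in R^D is produced by a decoder plus observation
# noise. The decoder weights are random stand-ins, not a trained network.
rng = np.random.default_rng(0)
K, D = 4, 64                       # latent and observed dimensions, K < D

W = rng.normal(size=(D, K))        # "decoder" parameters theta

def decode(h):
    return np.tanh(W @ h)          # deterministic part of p_theta(X | h)

h = rng.normal(size=K)                      # h ~ p(H)
x = decode(h) + 0.1 * rng.normal(size=D)    # x ~ p_theta(X | h)
```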
C
We examine the inputs through 18 test cases to see if the circuit is acceptable. Next, we verify with DFS that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex, as shown in Table 1. The result of moving from vertex K2 to vertex K1 is the same as that of XNOR, and the result of moving from vertex K2 to vertex K3 is the same as that of XOR, confirming that this study is feasible.
Exploration based on previous experiments and graph theory found errors in structural computers that use electricity as a medium. The cause of these errors is a basic property of electric charge: it flows from high potential to low. In short, the direction of current, which is the flow of electricity, is determined only by the height of the potential, not by the structure or shape of the circuit.
We examine the inputs through 18 test cases to see if the circuit is acceptable. Next, we verify with DFS that the output is possible for the actual pin connection state. As mentioned above, the search is carried out and the results are expressed by the unique number of each vertex, as shown in Table 1. The result of moving from vertex K2 to vertex K1 is the same as that of XNOR, and the result of moving from vertex K2 to vertex K3 is the same as that of XOR, confirming that this study is feasible.
To simulate the aforementioned structural computer theory, we built a device in the form of a USB-connected module. However, as the circuit grows in size, a number of USB-connected simulation devices are required, resulting in cost problems. We therefore decided to verify that the structural computer theory presented so far actually works without the cost of circuit building, to simulate the connection of complex circuits rather than just gate circuits, and to set up metrics for experiments that can test structural computers for logical errors.
However, this circuit confirms that circuit discovery errors occur at the Y-shaped junctions (C3 to G3, D3 to G3 / E1 to H1 / I1 to K3, J1 to K3) because the electricity unconditionally moves toward the lower potential.
D
Hence any function x^n with gcd(n, q−1) ≠ 1, under the action of K, settles down to the function x^{q−1}. Further, m is the least integer such that n^m mod (q−1) = 0, since any smaller m₁ with x^{n^{m₁}} = x^{q−1} would contradict the assumption that m is the index of nilpotence of n in the nilradical of Z_{q−1}.
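A small numeric sanity check of this statement, using q = 5 and n = 2 (which is nilpotent in Z_4 with index m = 2, since 2² = 4 ≡ 0 mod 4): iterating x ↦ x^n m times should send every nonzero x to x^{q−1} = 1.

```python
# Check over the prime field F_q with q = 5: iterating the map x -> x^n
# m times composes the exponents, giving x^(n^m); since n^m ≡ 0 (mod q-1),
# the map settles down to x^(q-1), which is 1 on all nonzero x.
q, n, m = 5, 2, 2
assert pow(n, m) % (q - 1) == 0            # n is nilpotent in Z_{q-1}

for x in range(1, q):
    y = x
    for _ in range(m):
        y = pow(y, n, q)                   # one application of x -> x^n
    assert y == pow(x, q - 1, q) == 1      # settled to x^(q-1) = 1
```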
In this section, we aim to compute the possible cycle lengths of the PP through the linear representation defined in (10). As discussed in Section 1.3, given a polynomial f(x), we associate a dynamical system through a difference equation of the form
In this section, we provide examples of estimating the possible orbit lengths of permutation polynomials in the form of Dickson polynomials D_n(x, α) [10] of degree n through the linear representation approach. The Dickson polynomial D_n(x, α) is of the form
The work [19] also provides a computational framework to compute the cycle structure of the permutation polynomial f by constructing a matrix A(f), of dimension q × q, through the coefficients of the (algebraic) powers f^k, k = 0, 1, …, q−1, and computing the multiplicative order of the eigenvalues of this matrix A(f) over a suitable field extension. In our work, to compute the cycle structure of the permutation polynomial, we have to compute the solutions of the associated linear dynamical system (19). This computation amounts to computing the multiplicative order of the eigenvalues of the matrix M over a suitable field extension [24]. From the table, we see that the dimension of the matrix M, which is used to compute the cycle lengths, is not necessarily q. Hence, this approach does not necessarily involve matrices of dimension q in all cases.
The paper is organized as follows. Section 2 focuses on linear representation for maps over finite fields F, develops conditions for invertibility, computes the compositional inverse of such maps, and estimates the cycle structure of permutation polynomials. In Section 3, this linear representation is extended to a family of parametric maps, studying its invertibility and the computation of the parametric inverse. The extension of the theory of linear representation to multivariate maps (maps over F^n) is discussed in Section 4, and finally, a linear representation of the group generated by a finite set of invertible maps over F^n is addressed in Section 5.
A
In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of views, the interpolating predictor often had the lowest TPR in view selection, as well as the lowest test accuracy, particularly when there was no correlation between the different views. When the sample size was smaller than the number of views, the interpolating predictor had a FPR in view selection that was considerably higher than that of all other meta-learners. In terms of accuracy it performed very well in the breast cancer data, but less so in the colitis data. However, in both cases it produced very dense models, which additionally had low view selection stability. The fact that its behavior varied considerably across our experimental conditions, combined with its tendency to select very dense models when the meta-learning problem is high-dimensional, suggests that the interpolating predictor should not be used when view selection is among the goals of the study under consideration. However, it may have some use when its interpretation as a weighted mean of the view-specific models is of particular importance.
Excluding the interpolating predictor, nonnegative ridge regression produced the least sparse models. This is not surprising considering it performs view selection only through its nonnegativity constraints. Its high FPR in view selection appeared to negatively influence its test accuracy, as there was generally at least one sparser model with better accuracy in both our simulations and real data examples. Although nonnegative ridge regression shows that the nonnegativity constraints alone already cause many coefficients to be set to zero, if one assumes the true underlying model to be sparse, one should probably choose one of the meta-learners specifically aimed at view selection.
In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking. In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of views, the interpolating predictor often had the lowest TPR in view selection, as well as the lowest test accuracy, particularly when there was no correlation between the different views. When the sample size was smaller than the number of views, the interpolating predictor had a FPR in view selection that was considerably higher than that of all other meta-learners. In terms of accuracy it performed very well in the breast cancer data, but less so in the colitis data. However, in both cases it produced very dense models, which additionally had low view selection stability. The fact that its behavior varied considerably across our experimental conditions, combined with its tendency to select very dense models when the meta-learning problem is high-dimensional, suggests that the interpolating predictor should not be used when view selection is among the goals of the study under consideration. However, it may have some use when its interpretation as a weighted mean of the view-specific models is of particular importance.
The nonnegative elastic net, with its additional L1 penalty compared with ridge regression, is one such method. In our simulations it produced sparser models than nonnegative ridge regression, usually with better or comparable accuracy. These sparser models were associated with a reduction in FPR and FDR, but in some settings also with a reduction in TPR, particularly when there are correlations between the views. However, we fixed the mixing parameter α at 0.5 to observe a specific setting in between ridge regression and the lasso. In practice, one can tune α, for example through cross-validation. This may allow the elastic net to better adapt to different correlation structures. In the colitis data, the elastic net performed better than nonnegative ridge regression in terms of test accuracy, whereas in the breast cancer data it performed slightly worse. However, in both cases it produced much sparser models, demonstrating its use in view selection.
In this article we investigate how the choice of meta-learner affects the view selection and classification performance of MVS. We compare the following meta-learners: (1) the interpolating predictor of Breiman (1996), (2) nonnegative ridge regression (Hoerl & Kennard, 1970; Le Cessie & Van Houwelingen, 1992), (3) the nonnegative elastic net (Zou & Hastie, 2005), (4) the nonnegative lasso (Tibshirani, 1996), (5) the nonnegative adaptive lasso (Zou, 2006), (6) stability selection with the nonnegative lasso (Hofner et al., 2015), and (7) nonnegative forward selection. All of these meta-learners provide models with nonnegative coefficients. In addition, they can all set some coefficients to zero, thus potentially obtaining sparse models and performing view selection. Although not an exhaustive comparison of all possible meta-learners, six of these are popular feature selection methods in their own right, and would most likely end up high on many researchers' lists of candidate meta-learners. A likely exception to this is nonnegative ridge regression, since ridge regression without nonnegativity constraints would not set any coefficients to zero. However, this method is included because it provides an indication of the view selection effect of just the addition of nonnegativity constraints on the meta-learner. Each of the seven candidate meta-learners is described in more detail below.
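As an illustration of how a nonnegativity constraint combined with an L1 penalty induces sparsity, here is a minimal coordinate-descent sketch of the nonnegative lasso on synthetic data; it is not the implementation used in the article, and the adaptive and stability-selection variants build on top of this basic estimator:

```python
import numpy as np

def nonnegative_lasso(X, y, lam=0.1, n_iter=200):
    """Nonnegative lasso by coordinate descent: minimise
    ||y - X b||^2 / (2n) + lam * sum(b)  subject to  b >= 0.
    A minimal sketch, not the article's implementation."""
    n, p = X.shape
    b = np.zeros(p)
    z = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]       # partial residual
            rho = X[:, j] @ r / n
            b[j] = max(0.0, rho - lam) / z[j]    # soft-threshold, clipped at 0
    return b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_b = np.array([1.0, 0.5, 0.0, 0.0, 0.0])     # sparse ground truth
y = X @ true_b + 0.05 * rng.normal(size=100)
b = nonnegative_lasso(X, y)
```

The estimate keeps the two truly active coefficients positive and drives the irrelevant ones to (or very near) zero, which is the selection behavior exploited at the meta-learning level.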
A
Table 8: p-values of the Wilcoxon Signed Ranks Test on DepAD algorithms paired with the benchmark methods.
Wilcoxon signed ranks tests are conducted on the results of each of the two DepAD algorithms, i.e., FBED-CART-PS and FBED-CART-Sum, paired with each of the nine benchmark methods. The alternative hypothesis is that a DepAD algorithm is better than the comparison method. The p-values are shown in Table 8, where * indicates that the p-value is less than 0.05.
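The test itself can be sketched as follows, using a normal approximation for the one-sided signed-ranks statistic; the paired scores are made-up stand-ins for per-dataset results, not values from Table 8:

```python
import math

def wilcoxon_signed_rank(x, y):
    """One-sided Wilcoxon signed-ranks test (normal approximation) for the
    alternative "x tends to be larger than y". A minimal sketch: zero
    differences are dropped and tied |differences| get arbitrary distinct
    ranks, without continuity or tie corrections."""
    d = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    r = [0.0] * len(d)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    w_plus = sum(ri for ri, di in zip(r, d) if di > 0)
    n = len(d)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    p = 0.5 * math.erfc(z / math.sqrt(2.0))   # upper-tail p-value
    return w_plus, p

x = [0.92, 0.88, 0.95, 0.91, 0.90, 0.93, 0.89, 0.94]  # hypothetical DepAD scores
y = [0.85, 0.86, 0.90, 0.84, 0.88, 0.87, 0.83, 0.90]  # hypothetical benchmark
w, p = wilcoxon_signed_rank(x, y)
```

In practice one would use a library routine with exact tie handling; this sketch only shows how the one-sided alternative maps onto the statistic.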
Effectiveness: The two DepAD algorithms, FBED-CART-PS, and FBED-CART-Sum, demonstrate superior performance over nine state-of-the-art anomaly detection methods in the majority of cases. The two DepAD methods do not outperform wkNN. However, they show advantages over wkNN in higher dimensional datasets in terms of both ROC AUC and AP.
According to Figure 7 and Table 8, the two DepAD algorithms are significantly better than all benchmark methods except for wkNN and iForest in terms of ROC AUC. With wkNN, the results are similar. With iForest, the p-values are very close to 0.05. In terms of AP, the two DepAD algorithms yield significantly better results than all benchmark methods except for wkNN, iForest and COMBN, as shown in Figure 8 and Table 8. With wkNN, the p-value is around 0.5, which indicates a similar performance. The p-values with iForest and COMBN are close to 0.05. Furthermore, the two DepAD methods significantly outperform ALSO, which is attributed to the inclusion of relevant variable selection. In summary, the two DepAD algorithms outperform most of the benchmark methods, including both proximity-based methods and existing dependency-based methods.
In this subsection, we answer the question: compared with state-of-the-art anomaly detection methods, how do the instantiated DepAD algorithms perform? We choose the two DepAD algorithms, FBED-CART-PS and FBED-CART-Sum, and compare them with the nine state-of-the-art anomaly detection methods shown in Table 7, including seven proximity-based methods and two dependency-based methods. The settings of these methods can be found in Table 7.
C
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL, for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of uncertainty approaches) [Abbasi-Yadkori et al., 2011, Abeille et al., 2021]. We use Bernstein-style concentration for self-normalized martingales, which was previously proposed in the context of scalar logistic bandits in Faury et al. [2020], to define our confidence set over the true parameter, taking into account the effects of the local curvature of the reward function. We show that the performance of CB-MNL (as measured by regret) is bounded as Õ(d√T + κ), significantly improving the theoretical performance over existing algorithms where κ appears as a multiplicative factor in the leading term. We also leverage a self-concordance [Bach, 2010] like relation for the multinomial logit reward function [Zhang & Lin, 2015], which helps us limit the effect of κ on the final regret upper bound to only the higher-order terms. Finally, we propose a different convex confidence set for the optimization problem in the decision set of CB-MNL, which reduces the optimization problem to a constrained convex problem.
choice model for capturing consumer purchase behavior in assortment selection models (see Flores et al. [2019] and Avadhanula [2019]). Recently, large-scale field experiments at Alibaba [Feldman et al., 2018] have demonstrated the efficacy of the MNL model in boosting revenues. Rusmevichientong et al. [2010] and Sauré & Zeevi [2013] were a couple of early works that studied explore-then-commit strategies for the dynamic assortment selection problem under the MNL model when there are no contexts/product features. The works of Agrawal et al. [2019] and Agrawal et al. [2017] revisited this problem and presented adaptive online learning algorithms based on the Upper Confidence Bound (UCB) and Thompson Sampling (TS) ideas. These approaches, unlike earlier ideas, did not require prior information about the problem parameters and had near-optimal regret bounds. Following these developments, the contextual variant of the problem has received considerable attention. Cheung & Simchi-Levi [2017] and Oh & Iyengar [2019] propose TS-based approaches and establish Bayesian regret bounds on their performance (our results give worst-case regret bounds, which are strictly stronger than Bayesian regret bounds; worst-case regret bounds directly imply Bayesian regret bounds with the same order dependence). Chen et al. [2020] present a UCB-based algorithm and establish min-max regret bounds. However, these contextual MNL algorithms and their performance bounds depend on a problem parameter κ that can be prohibitively large, even for simple real-life examples. See Figure 1 for an illustration and Section 1.2 for a detailed discussion.
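For concreteness, under the MNL model the purchase probabilities for an offered assortment take a softmax-like form with an outside (no-purchase) option; a small sketch in which the parameter and feature values are illustrative:

```python
import numpy as np

def mnl_choice_probs(theta, assortment):
    """Multinomial logit purchase probabilities for an offered assortment.
    Rows of `assortment` are product feature vectors x_i; item i is chosen
    with probability exp(x_i . theta) / (1 + sum_j exp(x_j . theta)), and
    the leftover mass is the no-purchase option."""
    u = assortment @ theta
    expu = np.exp(u - u.max())             # shift for numerical stability
    denom = np.exp(-u.max()) + expu.sum()  # the "1 +" term, shifted consistently
    return expu / denom

rng = np.random.default_rng(0)
theta = rng.normal(size=4)                 # illustrative true parameter
assortment = rng.normal(size=(5, 4))       # 5 offered products, 4 features
p = mnl_choice_probs(theta, assortment)
```

The probabilities sum to strictly less than one; the remainder is the probability of no purchase, which is what makes assortment selection a combinatorial problem over which items to offer.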
Our result is still O(√d) away from the minimax lower bound of Chu et al. [2011] known for the linear contextual bandit. In the case of logistic bandits, Li et al. [2017] make an i.i.d. assumption on the contexts to bridge the gap (however, they still retain the κ factor). Improving the worst-case regret bound by O(√d) while keeping κ as an additive term is an open problem. It may be possible to improve the dependence on κ by using a higher-order approximation for the estimation error. Finding a lower bound on the dependence on κ is an interesting open problem and may require newer techniques than presented in this work.
In summary, our work establishes strong worst-case regret guarantees by carefully accounting for local gradient information and using second-order function approximation for the estimation error.
where pessimism is the additive inverse of the optimism (the difference between the payoffs under the true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that $\theta_* \in C_t(\delta)$ (see Eq. (12)), the pessimism is non-positive in all rounds. Thus, the regret is upper bounded by the sum of the prediction errors over $T$ rounds. In Section 4.1 we derive an expression for the prediction error upper bound for a single round $t$. We also contrast our analysis with the previous works Filippi et al. [2010], Li et al. [2017], Oh & Iyengar [2021] and point out the specific technical differences that allow us to use a Bernstein-like tail concentration inequality and therefore achieve stronger regret guarantees. In Section 4.2, we describe the additional steps leading to the statement of Theorem 1. The style of the arguments is simpler and shorter than that in Faury et al. [2020]. Finally, in Section 4.3, we discuss the relationship between the two confidence sets $C_t(\delta)$ and $E_t(\delta)$ and show that even using $E_t(\delta)$ in place of $C_t(\delta)$, we obtain regret upper bounds with the same parameter dependence as in Corollary 2.
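The non-positivity of the pessimism term follows from optimism alone: whenever the true parameter lies in the confidence set, the optimistic payoff estimate dominates the true payoff. A minimal numerical sketch with a linear payoff and an $\ell_2$ confidence ball (the function names and the ball construction are our illustrative assumptions, not the CB-MNL construction itself):

```python
import numpy as np

def optimistic_payoff(x, theta_hat, beta):
    """Max of <x, theta> over the confidence ball ||theta - theta_hat||_2 <= beta."""
    return x @ theta_hat + beta * np.linalg.norm(x)

rng = np.random.default_rng(0)
theta_star = rng.normal(size=5)
theta_hat = theta_star + 0.1 * rng.normal(size=5)   # estimate near theta_*
beta = np.linalg.norm(theta_star - theta_hat)       # radius that contains theta_*

x = rng.normal(size=5)
opt = optimistic_payoff(x, theta_hat, beta)
true = x @ theta_star

pessimism = true - opt          # non-positive when theta_* is in the ball
prediction_error = opt - true   # the quantity the regret analysis bounds
assert pessimism <= 1e-12
```

By Cauchy–Schwarz, $x^\top\theta_* \le x^\top\hat\theta + \beta\|x\|_2$ whenever $\|\theta_*-\hat\theta\|_2\le\beta$, which is exactly why summing prediction errors upper-bounds the regret.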
C
Table 1: Action detection results on the validation set of THUMOS-14, measured by mAP (%) at different tIoU thresholds. Our VSGN achieves the highest mAP at tIoU threshold 0.5 (the commonly adopted criterion), significantly outperforming all other methods.
∗ Re-implementation with the same features as ours. We replace 3D convolutions with 1D convolutions to adapt to the feature dimension.
We compare the inference time of different methods on the ActivityNet validation set on a 1080 Ti GPU in Table 8. Compared to end-to-end frameworks such as PBRNet, methods using pre-extracted features such as BMN, G-TAD, and VSGN can re-use features extracted for other tasks, and they do not introduce complex 3D convolutions in the TAL architecture; therefore, they have noticeably lower inference time. Our VSGN has negligible computation in VSS and has a cost in xGPN similar to that of the GNNs in G-TAD. Additionally, it uses fewer anchors (1,240 vs. 4,950) and does not have an ROIAlign stage, so it runs faster than G-TAD.
Cross-scale graph network. The xGN module contains a temporal branch to aggregate features in a temporal neighborhood, and a graph branch to aggregate features from intra-scale and cross-scale locations. It then pools the aggregated features into a smaller temporal scale. Its architecture is illustrated in Fig. 4. The temporal branch contains a Conv1d(3, 1) layer (for conciseness, we use Conv1d($m$, $n$) to denote a 1-D convolution with kernel size $m$ and stride $n$). In the graph branch, we build a graph on all the features from both Clip O and Clip U, and apply edge convolutions [38] for feature aggregation.
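The Conv1d($m$, $n$) notation can be illustrated with a plain "valid" 1-D convolution; this is a hypothetical NumPy sketch with illustrative weights, not the trained layer of xGN:

```python
import numpy as np

def conv1d(x, kernel, stride=1):
    """Valid 1-D convolution (cross-correlation) with a given stride,
    illustrating Conv1d(m, n): m = len(kernel), n = stride."""
    m = len(kernel)
    out_len = (len(x) - m) // stride + 1
    return np.array([x[i * stride:i * stride + m] @ kernel
                     for i in range(out_len)])

x = np.arange(8, dtype=float)            # a toy 1-D feature sequence
y = conv1d(x, np.ones(3) / 3, stride=1)  # Conv1d(3, 1) acting as a moving average
assert len(y) == 6 and np.isclose(y[0], 1.0)
```

With kernel size 3 and stride 1, an input of length 8 yields an output of length 6, matching the usual output-length formula $(L - m)/n + 1$.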
A
(ii) in the next exploration phase, compare and choose specific ML algorithms for the ensemble and then proceed with their particular instantiations, i.e., the models (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c–e));
R4: Contrast the results of all model-generation stages and update the majority-voting ensemble. In evolutionary optimization, a crossover and mutation phase leads to a propagation of more crossover and mutation phases with exponential growth (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(b)).
(iii) during the detailed examination phase, zoom in into interesting clusters already explored in the previous phase, and focus on indications that confirm either their approval in the ensemble or their need for transformation through the evolutionary process (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(f and g));
After another hyperparameter space search (see VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(d)) with the help of supporter views (VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(c, f, and g)), out of the 290 models generated in $S_2$, we select 28 to add to the ensemble (cf. VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization(e)).
B
This algorithm treats the spatial distribution of swarm agents, called the density distribution, as a probability distribution and employs the Metropolis-Hastings (M-H) algorithm to synthesize a Markov chain that guides the density distribution toward a desired state.
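A generic version of this construction can be sketched as follows (a textbook Metropolis–Hastings transition matrix over a small discrete state space with a symmetric proposal we chose for illustration; the cited synthesis algorithms add further objectives and constraints):

```python
import numpy as np

def mh_transition_matrix(pi, proposal):
    """Metropolis-Hastings chain whose stationary distribution is pi,
    built from a symmetric proposal matrix."""
    n = len(pi)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                accept = min(1.0, pi[j] / pi[i])   # M-H acceptance ratio
                M[i, j] = proposal[i, j] * accept
        M[i, i] = 1.0 - M[i].sum()                 # rejection mass stays in place
    return M

pi = np.array([0.1, 0.2, 0.3, 0.4])    # desired swarm density distribution
proposal = np.full((4, 4), 0.25)       # symmetric random-walk proposal
M = mh_transition_matrix(pi, proposal)

density = np.full(4, 0.25)             # uniform initial density
for _ in range(200):
    density = density @ M              # propagate the density distribution
assert np.allclose(density, pi, atol=1e-6)
```

Detailed balance ($\pi_i M_{ij} = \pi_j M_{ji}$) holds by construction, so the agents' density distribution converges to the desired state from any initial condition.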
The probabilistic guidance algorithm led to the development of numerous Markov chain synthesis algorithms involving specific objectives and constraints [8, 9, 10, 11, 12, 13, 14, 15, 16, 17].
In this section, we apply the DSMC algorithm to the probabilistic swarm guidance problem and provide numerical simulations showing that the convergence rate of the DSMC algorithm is considerably faster than that of the previous Markov chain synthesis algorithms in [7] and [14].
The current literature covers a broad spectrum of methodologies for Markov chain synthesis, incorporating both heuristic approaches and optimization-based techniques [4, 5, 6]. Each method provides specialized algorithms tailored to the synthesis of Markov chains in alignment with specific objectives or constraints.
Markov chain synthesis plays a central role in probabilistic swarm guidance, which has led to the development of various algorithms incorporating additional transition and safety constraints [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17].
A
We use the registration subset with 10 poses for each class and downsample each shape to 2,000 faces.
In contrast, HiPPI and our method require shape-to-universe representations. To obtain these, we use synchronisation to extract the shape-to-universe representation from the pairwise transformations. By doing so, we obtain the initial $U$ and $Q$. We refer to this method of synchronising the ZoomOut results as ZoomOut+Sync, which directly serves as initialisation for HiPPI and our method. Throughout this section we also report results of the initialisation methods ZoomOut and ZoomOut+Sync. Further details can be found in the supplementary material.
While the PCK curves of ours, ZoomOut+Sync, and HiPPI in Fig. 2 are close, the AUC in Tab. 2 shows that our performance is still superior by a small margin. Qualitative results can be found in the supplementary material.
Partial functional maps are rectangular and low-rank [58], and this experiment shows that our method can also handle this more general case. More details can be found in the supplementary material.
Our method shows state-of-the-art results and surpasses all competitors on this dataset, see Fig. 2 and Tab. 2.
B
On the side of directed path graphs, to the state of the art our algorithm is the only one that does not use the results in [4], which give a linear-time algorithm able to establish whether a path graph is also a directed path graph (see Theorem 5 for further details). Thus, prior to this paper, it was necessary to implement two algorithms to recognize directed path graphs, while we obtain our recognition algorithm for directed path graphs by slightly modifying the recognition algorithm for path graphs.
The paper is organized as follows. In Section 2 we present the characterization of path graphs and directed path graphs given by Monma and Wei [18], while in Section 3 we explain the characterization of path graphs by Apollonio and Balzotti [1]. In Section 4 we present our recognition algorithm for path graphs, we prove its correctness, we report some implementation details and we compute its time complexity. Finally, in Section 5 we provide a similar analysis for directed path graphs.
On the side of directed path graphs, we first extend the characterization in [1] for path graphs to directed path graphs, and then we adapt the recognition algorithm for path graphs to directed path graphs, obtaining algorithm RecognizeDPG.
In this section we report the characterization of path graphs and directed path graphs described in [18]. We start with a formal definition of these classes of graphs.
In this way, we do not improve the time complexity, but we unify and strictly simplify the study of path graphs and directed path graphs from the algorithmic point of view.
D
Conflict of interest/Competing interests: None.
We report the averaged mixed Hamming error rates for our methods and the other three competitors in Table 4. Mixed-$\mathrm{SLIM}_{\tau appro}$ outperforms the other three Mixed-SLIM methods on all SNAP ego-networks, and it significantly outperforms Mixed-SCORE, OCCAM, and GeoNMF on the GooglePlus and Twitter networks. Mixed-SLIM methods have smaller averaged mixed Hamming error rates than Mixed-SCORE, OCCAM, and GeoNMF on the GooglePlus and Twitter networks, while they perform slightly worse than Mixed-SCORE on the Facebook networks. Meanwhile, we also find that OCCAM and GeoNMF share similar performances on the ego-networks. It is interesting that the error rates on the Twitter and GooglePlus networks are higher than those on Facebook, which may be because the Twitter and GooglePlus networks have a higher proportion of overlapping nodes than Facebook.
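A common way to compute such a mixed Hamming error rate is to compare the estimated membership matrix with the ground truth under the best column permutation; the sketch below is our illustrative version of that idea (the paper's exact definition lives in its experiments section):

```python
import numpy as np
from itertools import permutations

def mixed_hamming_error(Pi_hat, Pi):
    """Min over column permutations of the average per-node l1 membership error."""
    n, K = Pi.shape
    best = np.inf
    for perm in permutations(range(K)):
        err = np.abs(Pi_hat[:, perm] - Pi).sum() / n
        best = min(best, err)
    return best

Pi = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.5, 0.5]])        # true mixed memberships (rows sum to 1)
Pi_hat = Pi[:, ::-1]              # same memberships with community labels swapped
assert mixed_hamming_error(Pi_hat, Pi) == 0.0
```

The permutation minimization makes the error invariant to community relabeling, which is why a perfect estimate with swapped labels scores zero.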
Authors’ contributions. Qing mainly worked on the algorithm and theoretical properties. Wang mainly worked on the algorithm and whole paper organization.
In this section, we first introduce the main algorithm mixed-SLIM which can be taken as a natural extension of the SLIM (SLIM, ) to the mixed membership community detection problem. Then we discuss the choice of some tuning parameters in the proposed algorithm.
http://www-personal.umich.edu/~mejn/netdata/. For the four datasets, the true labels are suggested by the original authors, and they are regarded as the “ground truth” to investigate the performances of Mixed-SLIM methods in this paper.
B
For instance, $\mathcal{X}$ can be a torus $\mathbb{T}^d$, which can be viewed as the $d$-dimensional hypercube $[0,1)^d$
To study optimization problems on the space of probability measures, we first introduce the background knowledge of the Riemannian manifold and the Wasserstein space. In addition, to analyze the statistical estimation problem that arises in estimating the Wasserstein gradient, we introduce the reproducing kernel Hilbert space.
We specialize to such a structure only for rigorous theoretical analysis, which also appears in other works involving the Wasserstein space (Gräf and Hielscher, 2015).
artifacts adopted only for theoretical analysis. We present the details of such a modified algorithm in Algorithm 2 in §A.
over the Wasserstein space $\mathcal{P}_2(\mathcal{X})$. Such an optimization problem
B
To learn effective decentralized policies, there are two main challenges. Firstly, it is impractical to learn an individual policy for each intersection in a city or a district containing thousands of intersections. Parameter sharing may help. However, each intersection has a different traffic pattern, and a simple shared policy hardly learns and acts optimally at all intersections. To handle this challenge, we formulate the policy learning in a road network as a meta-learning problem, where traffic signal control at each intersection corresponds to a task, and a policy is learned to adapt to various tasks. Reward function and state transition of these tasks vary but share similarities since they follow the same traffic rules and have similar optimization goals. Therefore, we represent each task as a learned and low-dimensional latent variable obtained by encoding the past trajectory in each task. The latent variable is a part of the input of the policy, which captures task-specific information and helps improve the policy adaption.
may cause learning to be non-stationary, because the agent may receive different rewards and observation transitions for the same action at the same observation. In this case, the received rewards and observation transitions of the current agent cannot be well predicted conditioned only on its own observations and performed actions. Conversely, to avoid such non-stationarity, we hope the learned decentralized policy makes the observation transition and reward predictable. That is, based on the learned $\pi(a_i \mid o_{i,t})$,
The observation-action history of agent $i$ at time $t$ is denoted as $\tau_{i,:t}$. $\mathcal{R}=\{\mathcal{R}_i\}_{i=1}^{N}$ is the reward for each agent. As stated in Sec. 3.3, the reward is calculated from the partial observation (queue length), and the observation transition may be unstable in a multi-agent system. That is, even if the agent performs the same action on the same observation at different timesteps, the agent may receive different observation transitions because neighbor agents may perform different actions. Hence, we define the reward function of each agent as $r_{i,t}=\mathcal{R}_i(o_{i,t}, a_i, o_{i,t+1})$.
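As a minimal sketch of such a local reward (assuming, as the text suggests, a queue-length-based penalty computed only from the agent's own observations; the observation layout and function name here are our illustrative choices):

```python
def reward(o_t, a, o_t1):
    """Local reward r_{i,t} = R_i(o_t, a, o_{t+1}): penalize the queue
    lengths observed at the intersection after the action is applied."""
    return -sum(o_t1["queue_lengths"])

o_t = {"queue_lengths": [3, 1, 4, 0]}   # queues per incoming lane before acting
o_t1 = {"queue_lengths": [2, 1, 3, 0]}  # queues after the signal phase change
assert reward(o_t, 0, o_t1) == -6
```

Because the reward depends only on the agent's own next observation, it is computable in fully decentralized execution, at the cost of the transition instability discussed above.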
Before formulating the problem, we first design the learning paradigm by analyzing the characteristics of traffic signal control (TSC). Due to the coordination among different signals, the most direct paradigm may be centralized learning. However, the global state information in TSC is not only highly redundant and difficult to obtain in a realistic deployment, but also likely to suffer from dimensional explosion. Moreover, once the policy function relies on the global state or on neighbors during execution, it is hard to transfer the policy from the training scenario to unseen scenarios containing different road networks. Hence, it is natural to resort to a decentralized policy, which controls each signal conditioned only on its own history. However, fully decentralized learning ignores coordination: if agents behave independently, each maximizes its own reward and may sacrifice the interests of others, making it difficult for the entire system to reach the optimum. Therefore, we model the task as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) [67]. The neighbors' information is considered and all agents' policies are optimized synchronously in training, while only the agent's own observation history is used in execution.
Secondly, even for a specific task, the received rewards and observations are uncertain to the agent, as illustrated in Fig. 1, which makes the policy learning unstable and non-convergent. Even if the agent performs the same action on the same observation at different timesteps, it may receive different rewards and observation transitions because of neighbor agents' different actions. In this case, the received rewards and observation transitions of the current agent cannot be well predicted conditioned only on its own or partial neighbors' observations and performed actions. To avoid this situation, four decoders are introduced to predict the next observations and rewards, respectively without neighbor agents' policies and with partial neighbor agents. In addition, an intrinsic reward is designed to reduce the bias among the different predictions and to enhance learning stability. In other words, the design of the decoders and the intrinsic reward resembles the law of the contrapositive: unstable learning makes the predicted rewards and observation transitions unstable in a decentralized setting, whereas our decoders and intrinsic reward encourage the predictions to converge. In addition, from the perspective of information theory, the intrinsic reward design makes the policy of each agent robust to neighbors' policies, which makes the learned policy easier to transfer.
D
   >> J = @(lambda,X,lambda0,X0,G,S) G*X-lambda*X0-lambda0*X-X*S;   % enter the Jacobian
   >> domain = {’1+x+x^2’,’1+x+x^2+x^3’, ’1+x’};   % representation of the domain for the mapping f
$\mathpzc{dim}_{\mathbf{f}}(\mathbf{x}_{*})+\mathpzc{rank}\left(\mathbf{f}_{\mathbf{x}}(\mathbf{x}_{*})\right)=\text{the dimension of the domain of }\mathbf{f}.$
   >> domain = ones(4,1); parameter = {P,J,v};   % domain (space of 4x1 vectors) and parameters
   >> domain = {1,ones(n,k)};   % representation of the domain for the mapping g
D
Last, suppose that for some size $x$ we have $f_x>0$, whereas its prediction is $f'_x=0$. In this case, $x$ is not in the profile set $P$. We call items of such size special.
implementation of ProfilePacking, we use the algorithm FirstFitDecreasing (?) to compute the profile packing, instead of an optimal algorithm. Specifically, FirstFitDecreasing first sorts items in non-increasing order of their sizes and then packs the sorted sequence using FirstFit. Using FirstFitDecreasing helps reduce the time complexity, and the results could only improve by using an optimal algorithm for profile packing instead.
ProfilePacking packs these special items separately from others, using FirstFit. Algorithm 1 describes ProfilePacking in pseudocode.
14:      use FirstFit to pack $\sigma[i]$   ▷ $x$ is a special item
As stated in Section 2, we assume a discrete model in which items have integral sizes in $[1,k]$. While this is a natural model for many AI applications, our algorithms can also handle fractional item sizes in $[1,k]$ by treating them as "special" items, in the sense that they are not predicted to appear. ProfilePacking and Hybrid($\lambda$) will then pack these fractional items separately from all integral ones, using FirstFit.
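The two packing subroutines used throughout this discussion can be sketched in a few lines (a generic textbook implementation, not the paper's exact code; bin capacity and items are illustrative):

```python
def first_fit(items, capacity):
    """Pack each item into the first open bin with enough room,
    opening a new bin when none fits."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

def first_fit_decreasing(items, capacity):
    """Sort items in non-increasing order, then run FirstFit on them."""
    return first_fit(sorted(items, reverse=True), capacity)

items = [6, 5, 4, 3, 2]                       # integral sizes in [1, k], k = 10
assert len(first_fit_decreasing(items, 10)) == 2
assert len(first_fit(items, 10)) == 2
```

Sorting first lets large items claim bins early, which is why FirstFitDecreasing is a cheap stand-in for an optimal profile packing.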
B
Although these modifications improve the quality of the obtained results, their objective is to fix the deformations after the patches are stitched.
The proposed framework overcomes the limitations of previous methods. First, we theoretically solve the problem of stitching partial meshes since every chart is informed about its local neighborhood. Second, our method can easily fill the missing spaces in the final mesh by adding a new mapping for the region of interest. Because we can create an infinite number of patches using our approach, it is sufficient to locate a point in the empty space neighborhood and create an additional patch using ϕitalic-ϕ\phiitalic_ϕ function conditioned on the selected point.
To mitigate the issue of the discrete atlas, we define the Continuous Atlas, a novel paradigm for meshing any object with an atlas, which is leveraged in our method. In the first step, we construct a mapping that models the local structure of the object $S$. By Continuous Atlas ($\mathcal{CA}$), we define a mapping $\phi$ which transforms an open set $U \subset \mathbb{R}^2$ and a point $p \in S$ into a local neighborhood $V(p) \subset S$ of the point $p$: $\phi(U, p) = V(p)$.
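As a toy illustration of such a conditioned mapping (on an analytic surface rather than the learned $\phi$ of the paper; all names and the patch construction are our own assumptions), a 2-D parameter $u$ and a surface point $p$ can be mapped into a neighborhood of $p$:

```python
import numpy as np

def phi(u, p, eps=0.1):
    """Map a 2-D parameter u and a point p on the unit sphere into a local
    neighborhood V(p): move in the tangent plane at p, then re-project."""
    p = p / np.linalg.norm(p)
    # build an orthonormal tangent basis at p
    a = np.array([1.0, 0.0, 0.0]) if abs(p[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(p, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(p, t1)
    q = p + eps * (u[0] * t1 + u[1] * t2)
    return q / np.linalg.norm(q)        # re-project onto the surface

p = np.array([0.0, 0.0, 1.0])
q = phi(np.array([0.3, -0.2]), p)
assert np.isclose(np.linalg.norm(q), 1.0)   # q lies on the surface
assert np.linalg.norm(q - p) < 0.1          # q is in a neighborhood of p
```

Since $p$ itself conditions the map, sampling new points $p$ yields arbitrarily many overlapping patches, which is the property the Continuous Atlas exploits.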
In this paper we propose a different approach to this problem: we reformulate the classical definition of an atlas to obtain maps that are correctly connected. Therefore, our method suppresses the issue before it even occurs in the first place.
In this paper, we introduced a novel approach for generating high-quality 3D meshes composed of 2D patches directly from raw point clouds. We presented a Continuous Atlas paradigm that allows our model, Locally Conditioned Atlas, to produce an arbitrary number of patches to form a watertight mesh. The empirical evaluation of LoCondA on three extensive experiments confirms the validity of our approach and its competitive performance.
C
$\nu^{*} \in \operatorname*{Arg\,max}_{\|\nu\|_{2}\leq R}\varphi(\nu)$ and $\max_{\nu\in\mathbb{R}^{m}}\varphi(\nu)=\max_{\|\nu\|_{2}\leq R}\varphi(\nu)$.
$\geq h(\tilde{\theta})+\langle\nu^{*},\mathbf{A}\tilde{\theta}-b\rangle$
Also note that $h(\tilde{\theta})=\min_{\theta\in\Theta}\psi(\theta)$
$h(\tilde{\theta})+\langle\nu^{*},\mathbf{A}\tilde{\theta}-b\rangle$
we obtain that $(\tilde{\theta},\nu^{*})$
B
The inequality follows since $d(u)-2-\epsilon(u,h)\geq 0$.
By an intrinsic tree invariant we denote a map $f:\mathscr{T}\rightarrow\mathbb{R}$ on the set of all trees. Of particular interest
follows: suppose that there exists an intrinsic tree invariant $f:\mathscr{T}\rightarrow\mathbb{R}$ such that for every graph $G$
Let $G=(V,E)$ be a directed connected graph and $w:E\rightarrow\mathbb{R}$ an edge function. We call $w$ a discrete 1-form on $G$. Integrating $w$ is the problem of finding a vertex function $x:V\rightarrow\mathbb{R}$ minimizing the error:
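A minimal sketch of this integration problem, assuming (as is standard) a squared error $\sum_{(u,v)\in E}\bigl(x(v)-x(u)-w(u,v)\bigr)^2$; the least-squares minimizer can be obtained from the signed incidence matrix:

```python
import numpy as np

def integrate_one_form(n, edges, w):
    """Least-squares vertex potential x minimizing
    sum over edges (u, v) of (x[v] - x[u] - w_uv)^2."""
    B = np.zeros((len(edges), n))        # signed incidence matrix of G
    for k, (u, v) in enumerate(edges):
        B[k, u], B[k, v] = -1.0, 1.0
    x, *_ = np.linalg.lstsq(B, np.array(w), rcond=None)
    return x - x[0]                      # fix the free additive constant

# an exact gradient: w_uv = p[v] - p[u], so the error can be driven to zero
p = np.array([0.0, 1.0, 3.0, 2.0])
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
w = [p[v] - p[u] for u, v in edges]
x = integrate_one_form(4, edges, w)
assert np.allclose(x, p - p[0])
```

When $w$ is not an exact gradient (i.e., it has nonzero circulation around some cycle), the same call returns the potential with the smallest residual error instead of an exact integral.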
$\cap_{G}:\mathscr{T}_{G}\rightarrow\mathbb{N}$
A
$\sigma,\tau\in K,\ \sigma\cap\tau=\emptyset \implies \{g_{\bullet}(\sigma),g_{\bullet}(\tau)\}\text{ is generic.}$
If we use Lemma 4.8 in place of Lemma 4.6 in the proof of Theorem 2.1, the hypothesis on the m𝑚mitalic_m-colored family ℱℱ\mathcal{F}caligraphic_F can be weakened. This “improved” Theorem 2.1 can in turn be applied in the proof of Theorem 1.2, yielding the following:
Let $K$ be a simplicial complex on $n$ vertices. For any $m>\mu(K)$ there exists a generic nontrivial chain map from $C_{\bullet}(K)$ to $C_{\bullet}(G[n]^{m})$.
Roughly speaking, the following “Picasso Lemma” asserts that any simplicial complex can be realized within a cubical complex via a generic chain map. (See Figure 2.)
Figure 2. The graph $K_5$ (considered as a 1-dimensional simplicial complex) realized as a subcomplex of the grid complex $G[5]^3$ via the generic chain map given in Lemma 3.6
C