context (stringlengths 100–4.5k) | A (stringlengths 100–3.31k) | B (stringlengths 100–3.4k) | C (stringlengths 100–4.85k) | D (stringlengths 100–3.48k) | label (stringclasses, 4 values)
---|---|---|---|---|---|
We note that Eq. (14) is not the only way to estimate the gradient w.r.t. IPM. In this section, we show that performing gradient descent of $\Phi(p_0^{\theta},q_0)$ can be equivalent to policy gradient (Section 4.1), provide analysis towards monotonic improvement (Section 4.2) and then present the algorithm design (Section 4.3). | In Eq. (14), we can observe that the critic $f_{\alpha^*}$ needs to provide meaningful gradients (w.r.t. the input) for the generator. If the gradient of the critic happens to be 0 at some generated data points, even if the critic’s value could still make sense, the critic would provide no signal for the generator on these points (for example, MMD with very narrow kernels can produce such critic functions, where each data point defines the center of the corresponding kernel which yields gradient 0). Thus GANs trained with IPMs generally need to choose $\mathcal{A}$ such that the gradient of the critic is regularized: For example, Lipschitz constraints like weight clipping (Arjovsky et al., 2017) and gradient penalty (Gulrajani et al., 2017) for WGAN, and gradient regularizers for MMD GAN (Arbel et al., 2018). | By modeling the conditional probability through the trajectory, we provide an alternative way for gradient estimation which is equivalent to policy gradient, without differentiating through the composite functions. | More concretely, we first show that performing gradient descent of the DDPM sampler w.r.t. the IPM is equivalent to stochastic policy gradient, which echoes the aforementioned RL view but with a changing reward from the optimal critic function given by IPM. In addition, we present a surrogate function that can provide insights for monotonic improvements. Finally, we present a fine-tuning algorithm with alternative updates between the critic and the generator. | We note that Eq. (14) is not the only way to estimate the gradient w.r.t. IPM. In this section, we show that performing gradient descent of $\Phi(p_0^{\theta},q_0)$ can be equivalent to policy gradient (Section 4.1), provide analysis towards monotonic improvement (Section 4.2) and then present the algorithm design (Section 4.3). | B
Reveal the selected card to show that it is a blank card (otherwise $V$ rejects) and remove it from the stack. | After these steps, all non-blank cards from the template are placed at the corresponding positions in the area. $V$ is also convinced that these positions in the area were initially empty (consisting of all blank cards) before the protocol. | Finally, $P$ reveals all cards on the cells that contain a number (in the original Five Cell puzzle). $V$ verifies that the numbers on the cards match the numbers in the cells (otherwise $V$ rejects). $P$ also reveals all dummy cards to show that they are still blank. If all verification steps pass, then $V$ accepts. | Place each card from the template on top of each corresponding card from the area, creating $pq$ stacks of two cards. | Given a $p\times q$ matrix of cards called a template (which contains some non-blank cards, and possibly some blank cards) and another $p\times q$ matrix of cards representing an area from the puzzle grid, all known to $P$ but not to $V$. A printing protocol verifies that positions in the area corresponding to non-blank cards in the template are initially empty (consisting of all blank cards). The protocol then places all non-blank cards from the template at the corresponding positions in the area, replacing the original blank cards (see Fig. 5) without revealing any card to $V$. | A
$\mathsf{OT}_{c}(\alpha_{+},\beta_{+})\leq\int_{\mathcal{X}}c\,\mathrm{d}\pi_{+}=\int_{\mathcal{X}}c\,\mathrm{d}\pi=\mathsf{OT}_{c}(\alpha,\beta)$. | Thus, taking an infimum over $\alpha_{+}$ gives the “$\leq$” inequality. | By subtracting a constant from $\varphi$, we can ensure that the final term takes its desired form of $2\|\varphi\|_{\infty}$ without modifying the objective value. For compact $\mathcal{X}$, one can argue via Sion’s minimax theorem that the inequality above is actually an equality; for general $\mathcal{X}$, however, the situation is more subtle. Fortunately, we can sidestep any functional analysis by applying standard Kantorovich duality to the augmented problem from Proposition 1. The full proof in Section 6.1.3 proceeds with a careful analysis of $\bar{c}$-concave functions under the augmented cost $\bar{c}$ to match our penalized objective. Existence of maximizers follows by a compactness and semi-continuity argument under an appropriate weak topology. | The rest of the paper is organized as follows. Section 2 opens with a discussion of preliminaries, followed by a summary of our structural results for POT in Section 3. Section 4 provides robust estimation guarantees for MDE under $\mathsf{W}_{p}^{\varepsilon}$, as well as for $\mathsf{W}_{p}^{\varepsilon}$ itself as an estimate of $\mathsf{W}_{p}$. Section 5 applies our duality theory to build a robust WGAN and provides empirical results stemming from this approach, as well as comparisons to competing methods. | Taking an infimum over $\beta$ gives the “$\geq$” inequality. Existence of a minimizer for the left minimum follows by a standard compactness argument, and our explicit construction above carries this to the right minimum as well. | D
In this work, we propose a novel deep energy form based on the principle of minimum complementary energy (DCEM) as shown in Fig. 1, instead of the traditional minimum potential energy principle. DCEM is primarily used to solve linear elastic problems in traditional solid mechanics and has limited effectiveness in solving nonlinear problems. There are two reasons for the difficulty of applying the stress function method to nonlinear problems including geometric and material nonlinearity. In nonlinear mechanics, the material nonlinearity makes it challenging to directly apply the stress function method. | In this work, we propose a novel deep energy form based on the principle of minimum complementary energy (DCEM) as shown in Fig. 1, instead of the traditional minimum potential energy principle. DCEM is primarily used to solve linear elastic problems in traditional solid mechanics and has limited effectiveness in solving nonlinear problems. There are two reasons for the difficulty of applying the stress function method to nonlinear problems including geometric and material nonlinearity. In nonlinear mechanics, the material nonlinearity makes it challenging to directly apply the stress function method. | This work proposes an important supplementary energy form of the deep energy method. In linear elasticity, the relationship between stress and strain is linear, and the relationship between displacement and strain is also linear. Thus, there is no problem for computation based on both energy principles in linear elasticity, i.e. potential and complementary energy. However, there are lots of challenges in nonlinear problems. | As a result, if the problem is nonlinear, the governing equations of stress function not only become complex but also different materials referred to different forms of constitutive law should have different corresponding governing equations of stress function in nonlinear problems. Thus, this makes the stress function approach difficult to apply to nonlinear problems. Hence, we only solve the linear problems by DCEM in this work. | In the case of small strain linear elasticity, the nonlinear term in nonlinear geometric equations becomes negligible, leading us to disregard this nonlinear term and express the governing equations of the stress function by geometric equations of linear elasticity. | D |
The proposed consistency loss hinges on determining the optimal similarity between the representations of an input example and its transformed view, a value contingent on the strength of the composite data augmentation. However, this optimal similarity is not known a priori. To address this challenge, we adopt a data-driven approach, training a neural network to map from the composition vector of each data augmentation to the desired similarity. Recognizing the monotonic relationship between similarity and augmentation strength, we introduce and enforce a monotonicity constraint on the neural network. This constraint ensures that stronger composite data augmentations correspond to a strictly smaller valued similarity. | The proposed Contrastive Learning with Consistent Representations (CoCor) improves contrastive learning performance by exploring diverse data augmentations composed from a set of basic augmentations. | The proposed Contrastive Learning with Consistent Representations (CoCor) has the following contributions: | With consistent representations, CoCor achieves state-of-the-art results for various downstream tasks. Moreover, it can be readily integrated into existing contrastive learning frameworks, effectively imposing DA consistency on the encoder. | DA is also a key component in recent contrastive learning techniques Chen et al. (2020a); Tian et al. (2020b); He et al. (2020); Chen & He (2021); Xiao et al. (2020); Lee & Shin (2023). An encoder that learns good visual representations of the input data is trained with a contrastive loss. The contrastive loss is characterized by the following principle: in the feature space, two views of a given data example, transformed by distinct DA functions, exhibit correlation (similarity), whereas transformed views of different input examples manifest dissimilarity. The effectiveness of the encoder, trained on unlabeled data, is pivotal to the overall performance of contrastive learning and is contingent upon the choice of employed DAs. | B |
The data used for this work is derived from a customer’s banking, platform, and transaction data collected by the Mint app (Figure 3). Customers can link their bank accounts, credit cards, investment accounts, and loans to the app to receive a panoramic view of their finances and receive financial insights. Data security policies can be found at security.intuit.com. Mint has 3.6 million monthly active users with transaction history for customers that can range from three months to over a decade depending on how long the customer has used the app. | The data used for this work is derived from a customer’s banking, platform, and transaction data collected by the Mint app (Figure 3). Customers can link their bank accounts, credit cards, investment accounts, and loans to the app to receive a panoramic view of their finances and receive financial insights. Data security policies can be found at security.intuit.com. Mint has 3.6 million monthly active users with transaction history for customers that can range from three months to over a decade depending on how long the customer has used the app. | As part of linking financial accounts to the app, Mint will fetch the transaction history of customers. This includes the transaction timestamp, amount, transaction description, and transaction category. | Figure 3. The Mint app is used for tracking a customer’s transactions and managing their finances. Customers can link their savings, checking, and investments accounts and see their transaction history. | The Mint app allows customers to link their financial accounts. This includes checking and savings accounts, CDs, money market accounts, credit card accounts, and investment accounts as well as their respective balances. | D |
TABLE IV: Calibration metrics for a base neural network model. We aggregate them and calculate ranks for the first 50 time series in the yearly subset of the M4 forecasting dataset [47]. | We want to equip it with the ability to produce uncertainty estimates for predictions and corresponding confidence intervals. | TABLE V: Ranks of uncertainty estimation metrics for surrogate model type of uncertainty estimation aggregated over Forecasting data benchmark | Since Forecasting data has a lot of time series, we decided to count ranks and average it for all pairs of a base model and the corresponding uncertainty estimate for it. | TABLE IV: Calibration metrics for a base neural network model. We aggregate them and calculate ranks for the first 50 time series in the yearly subset of the M4 forecasting dataset [47]. | C |
Consider $n$ locations on the face; $\mathbf{s}_{i}$ records the | The NeRF takes as input a 3D location $\mathbf{x}=(x,y,z)$ and a viewing | $(x,y,z)$ coordinates of location $i$, and similarly | (with weights $w_{ki}$) of the transformation of vertex $i$ under part | Let $\mathbf{x}_{(k)}$ denote a patch of the image centered at location $k$ | B
To inspire the researchers who want to devote themselves to this field, we list several existing challenges for promising research: | MMGCN (Wei et al., 2019) establishes a user-item bipartite graph for each modality. For each node, the topology of adjacent nodes and the modality information of the item can be used to update the feature expression of the node. Based on MMGCN, GRCN (Yinwei et al., 2021) improves the performance of recommendations by adaptively modifying the graph’s structure during model training to delete incorrect interaction data (users clicked uninterested videos). Although these methods have achieved great success in performance, these methods are still limited by using a unified way to fuse user preferences of different modalities, ignoring the difference in the degree of user preference for different modalities. | PMGT (Liu et al., 2021b) proposes a pretrained graph transformer referring to Bert’s structure. It learns item representations with two objectives: graph structure reconstruction and masked node feature reconstruction. In POG (Chen et al., 2019b), it pretrains a transformer to learn the fashion matching knowledge, and then recommends for users through a cloth generation model. Besides, it is common in sequential recommendation, where it is difficult to train the model in an end-to-end scheme. For example, in the pretraining stage, MML (Pan et al., 2022) first trains the meta-learner through meta-learning to increase model generalization, then trains the item embedding generator in the second stage. Besides, TESM (Ni et al., 2022) and Victor (Lei et al., 2021) pretrain a well-designed graph neural network and a video transformer, respectively. Recently, some more advanced techniques have been adapted for higher training efficiency, such as knowledge distillation and prompt tuning. As for the former one, SGFD (Liu et al., 2023) distills a lighter modality encoder from a pretrained modality encoder, when finetuning for the recommendation task. Also, PromptMM (Wei et al., 2024) proposes a pretrain-prompt scheme to achieve easier finetuning and higher task adaptability. | As multimodal data differs from user interaction data, it contains much information unrelated to user preferences. For example, as shown in Figure 2(c), the interaction between movie 3 and user 1 is noisy, which should be removed. Filtering out noisy data in multimodal recommendation tasks can usually improve the recommendation performance. It is worth noting that noise can exist in the interaction graph or multimodal feature itself, so filtration can be embedded in the bridge and fusion, respectively. | It is worth noting that though some methods for different stages in a model are proposed (Lei et al., 2021), there is no up-to-date universal solution with the combinations of these techniques provided. | D
The idea is that the last agent is present throughout the entire allocation, and thus evaluates all $n$ bundles. | However, when we only take one valuation into consideration, we lose a lot of information about the allocation. For example, it is possible that the threshold of the last agent is $10$, but one of the allocated bundles has three chores whose costs to the last agent are $\{6,7,3\}$; such a bundle cannot appear in a valid output of FFD with the valuation of the last agent. | In particular, we give a necessary condition for the output of the HFFD algorithm in terms of the cost function of the agent who gets the last bundle $A_{n}$. We denote this agent by $\omega$, such that $\sigma(\omega)=n$. We call this agent the last agent. | Intuitively, we focus on the view of the last agent $\omega$ because the last agent participates in “bidding” over all the previous $n-1$ bundles, so the costs of all bundles are large for this agent. | which the allocation is a possible output of FFD, such that the unallocated chores and the threshold of the last agent remain the same. | A
The order in which values are chosen when assigning variables is decided by a value ordering heuristic; for COP instances, it is highly recommended to use first the value present in the last found solution, which is a technique known as solution(-based phase) saving [36, 13]. | Backtrack search for COP relies on an optimization strategy based on decreasingly updating the maximal bound (assuming minimization) whenever a solution is found; this is a kind of ramp-down strategy (related to Branch and Bound), | The order in which values are chosen when assigning variables is decided by a value ordering heuristic; for COP instances, it is highly recommended to use first the value present in the last found solution, which is a technique known as solution(-based phase) saving [36, 13]. | it seems to us that focusing only on proof from the start is not necessarily the right approach, especially with the new advances made in search (notably, solution saving and the three complementary conflict-based heuristics). | Hence, this ramp-down strategy provides a sequence of better and better solutions until no more exist, guaranteeing that the last found solution is optimal. | A |
This assumption limits the family of functions, but it allows for deeper and broader architectures due to the reduced time complexity. | y(tsimj)𝑦subscript𝑡subscriptsim𝑗\displaystyle y(t_{\text{sim}_{j}})italic_y ( italic_t start_POSTSUBSCRIPT sim start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT end_POSTSUBSCRIPT ) | y(tsimj)𝑦subscript𝑡subscriptsim𝑗\displaystyle y(t_{\text{sim}_{j}})italic_y ( italic_t start_POSTSUBSCRIPT sim start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT end_POSTSUBSCRIPT ) | This sequence includes both the original timestamps and additional points sampled uniformly within the intervals between them. The output value for each time in Ssimsubscript𝑆simS_{\text{sim}}italic_S start_POSTSUBSCRIPT sim end_POSTSUBSCRIPT is: | Let’s define the sequence Ssimsubscript𝑆simS_{\text{sim}}italic_S start_POSTSUBSCRIPT sim end_POSTSUBSCRIPT as: | D |
$D(P_{X}\|Q_{X})\geq D(P_{X}\|Q_{X})-D(W_{Y|X}\cdot P_{X}\,\|\,W_{Y|X}\cdot Q_{X})$. | In this case, $\mathcal{F}_{3}$ is given as | In this case, $\mathcal{F}_{3}$ is given as | In this case, $\mathcal{F}_{3}$ is given as | In this case, $\mathcal{F}_{3}$ is given as | A
$\mathcal{L}^{\prime}_{12}=\begin{pmatrix}-\frac{1}{3}&0&0&\cdots&0&-\frac{1}{3}\\ 0&0&0&\cdots&0&0\\ 0&0&-\frac{1}{3}&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&-\frac{1}{3}&0\\ -\frac{1}{3}&0&0&\cdots&0&0\end{pmatrix}_{2n\times 2n}$. | $\mathcal{L}^{\prime}_{12}=\begin{pmatrix}-\frac{1}{3}&0&0&\cdots&0&-\frac{1}{3}\\ 0&0&0&\cdots&0&0\\ 0&0&-\frac{1}{3}&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&-\frac{1}{3}&0\\ -\frac{1}{3}&0&0&\cdots&0&0\end{pmatrix}_{2n\times 2n}$ | $\mathcal{L}_{11}=\begin{pmatrix}1&-\frac{1}{3}&0&\cdots&0&-\frac{1}{3}\\ -\frac{1}{3}&1&-\frac{1}{3}&\cdots&0&0\\ 0&-\frac{1}{3}&1&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&1&-\frac{1}{3}\\ -\frac{1}{3}&0&0&\cdots&-\frac{1}{3}&1\end{pmatrix}_{2n\times 2n}$ | and $\mathcal{L}^{\prime}_{11}=\begin{pmatrix}1&-\frac{1}{3}&0&\cdots&0&0\\ -\frac{1}{3}&1&-\frac{1}{3}&\cdots&0&0\\ 0&-\frac{1}{3}&1&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&1&-\frac{1}{3}\\ 0&0&0&\cdots&-\frac{1}{3}&1\end{pmatrix}_{2n\times 2n}$ | $\mathcal{L}_{A}=\mathcal{L}_{A}^{\prime}=\left(\begin{array}{cccc|cccccc}1&0&\cdots&0&0&-\frac{1}{3}&0&0&\cdots&0\\ 0&1&\cdots&0&0&0&0&-\frac{1}{3}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&1&0&0&0&0&\cdots&-\frac{1}{3}\\ \hline 0&0&\cdots&0&\frac{2}{3}&-\frac{1}{3}&0&0&\cdots&-\frac{1}{3}\\ -\frac{1}{3}&0&\cdots&0&-\frac{1}{3}&1&-\frac{1}{3}&0&\cdots&0\\ 0&0&\cdots&0&0&-\frac{1}{3}&\frac{2}{3}&-\frac{1}{3}&\cdots&0\\ 0&-\frac{1}{3}&\cdots&0&0&0&-\frac{1}{3}&1&\cdots&0\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&-\frac{1}{3}&-\frac{1}{3}&0&0&0&\cdots&1\end{array}\right)_{3n\times 3n}$, | D
The principal now posts an ambiguous contract, the agent observes the ambiguous contract and chooses an action and bears the attendant cost, a payment function is selected from the support of the ambiguous contract, an outcome is drawn from the distribution over outcomes induced by that action, and the principal makes the payment to the agent specified by the selected contract. | We work with a familiar hidden-action moral hazard problem, as in Holmström (1979), Grossman and Hart (1983), and Laffont and Martimort (2009, Chapter 4), with the friction arising out of limited liability (as in Innes (1990)) rather than risk aversion. In contrast to much of the moral hazard literature, our principal offers ambiguous contracts to an ambiguity-averse agent. We implement the agent’s ambiguity aversion by modeling the agent as maximizing his max-min utility (Schmeidler, 1989; Gilboa and Schmeidler, 1993). | The agent is a max-min expected utility maximizer (Schmeidler, 1989; Gilboa and Schmeidler, 1993), and so evaluates each action $i$ according to the payment function that minimizes the expected payment of the action. | A classic contract for this setting includes a payment function $t=(t_{1},\ldots,t_{m})$, where $t_{j}$ specifies the non-negative payment from the principal to the agent when outcome $j\in[m]$ is realized. Given a payment function $t$, the agent chooses an action $i\in[n]$ that maximizes his expected payment minus cost. The principal, in turn receives the expected reward of the implemented action, minus the expected payment to the agent under the chosen action. | The worst payment function in $\tau$ for action $2$ is $t^{2}$, giving the agent an expected payment of $0$. Similarly, the worst payment function for action $3$ is $t^{1}$, for an expected payment of $0$. Thus, both actions $2$ and $3$ give the agent negative utilities. In contrast, the expected payment for action $4$ is $3/4$ under both $t^{1}$ and $t^{2}$, giving the agent an expected utility of $0$. The ambiguous contract $\langle\tau,4\rangle$ thus implements action $4$, with an expected payment of $3/4$, and an expected utility for the principal of $5/4$, strictly higher than her optimal utility under a classic contract. | B
ALOOKL stability bounds bias: Applied to our setting, let $\mathcal{M}$ be an ALOOKL stable randomized algorithm that takes as input a sample set and outputs some test $\psi:X^{1}\to[0,1]$. Then, if we sample $\bm{S}\sim\mathcal{D}^{n}$ and a test function $\bm{\psi}\sim\mathcal{M}(\bm{S})$, the expectation of $\bm{\psi}(\bm{S})$ is close to the expectation of $\bm{\psi}(\bm{x})$ where $\bm{x}\sim\mathcal{D}$ is a fresh sample independent of $\bm{\psi}$. | In Section 5, we generalize these ideas to other choices of $m$. Feldman and Steinke also show how to bound the mutual information using ALOOKL stability, corresponding to Theorem 7 in the case of $m=1$ [FS18]. Our techniques are similar to theirs, appropriately generalized for $m>1$. Our main contributions to this notion of stability are the idea to generalize the definition to the $m>1$ case as well as a unique presentation, particularly the introduction of half-conditional entropy. | The second claim, composition, was proven by Feldman and Steinke in their work introducing ALOOKL stability [FS18]. We therefore only need to sketch the first and third claims. | The second and third experiments are close because, in both cases, the input to the test ($\bm{x}$ for the second experiment and $\bm{S}_{\bm{i}}$ for the third experiment) is distributed according to $\mathcal{D}$ and independent of the test used ($\bm{\varphi}$ in the second experiment and $\bm{\varphi}^{\prime}$ in the third). For the second experiment, this independence is immediate from the experiment setup. For the third experiment, it holds because the drawing of $\bm{\varphi}^{\prime}$ only depends on $\bm{S}_{-\bm{i}}$, which is independent of $\bm{S}_{\bm{i}}$. Therefore, the only difference between the second and third experiments is in the marginal distributions of $\bm{\varphi}$ and $\bm{\varphi}^{\prime}$, and this difference is once again guaranteed to be small by the ALOOKL stability condition. | The goal is to show that the first and second experiments have similar expectations. We do this by using the third experiment as a bridge. The first and third experiments are close by the guarantee of ALOOKL stability – in particular, even conditioned on the choice of $\bm{S}$ and $\bm{i}$ (which fixes $\bm{S}_{\bm{i}}$), the distributions of $\bm{\varphi}$ and $\bm{\varphi}^{\prime}$ will be close. | B
A graph $F$ is outerplanar if it does not have $K_{4}$ or $K_{2,3}$ as a minor. Equivalently, it is outerplanar if it has a planar drawing such that all its vertices lie on the same face [syslo_characterisations_1979]. | The class of unlabelled graphs underlying an element of $\mathcal{L}_{1}^{+}$ coincides with the class of graphs of treewidth at most two. | The class of graphs underlying the elements of $\mathcal{L}_{t}$ and the class of graphs underlying the elements of $\mathcal{L}_{t}^{+}$ are minor-closed and union-closed. | The class of unlabelled graphs underlying an element of $\mathcal{L}_{1}$ coincides with the class of outerplanar graphs. | The classes $\mathcal{L}_{1}$ and $\mathcal{L}_{1}^{+}$ can be identified as the class of outerplanar graphs and as the class of graphs of treewidth at most two, respectively. | C
We recruited 32 participants (17 males, 15 females) via mailing lists and word of mouth, who are mainly from STEM (Science, Technology, Engineering, and Mathematics) fields and business schools, between 19 to 37 years old ($M=26$, $SD=3.57$) with differed experience on robots. After the experiment, the participants were given a questionnaire to collect their demographics, and experience with robots. | The participant should start from position C and move towards the Table in position D to deliver the yellow cup, during which a non-contact interaction between the participant and the Spot was recorded by the motion capture cameras. In the non-stationary conditions, the Spot robot started moving from B to A the moment the participant started moving from C to D. Participants were told to feel free about their walking speed and choice of paths. The walking process of the robot is fully autonomous and so were the participants informed before the experiment. Additionally, the robot obstacle avoidance is disabled so that Spot won’t go off the track. The position and orientation information from the OptiTrack system were recorded as soon as the Spot started to move. After 8 repeats, the whole trajectory data of one participant can be reconstructed in 2D as shown in Figure 5. Because of the unpredictable participant height and the fluctuation of height during walking, the Z-axis coordinate is not considered when calculating personal distance. Due to the fact that the Spot is a legged canine robot with no wheels, it is unlikely for it to walk repetitively on a precise same straight route, as a result of which there are small vertical offsets in the Spot’s trajectories. | At the setup stage, six OptiTrack motion cameras were mounted around the experiment zone to capture the in-situ position of the marker rigid bodies in sight. The position and orientation information of the rigid bodies were multi-casted in a local network with ROS built-in UDP communication. The origin of the OptiTrack 3D space coordinate was fixed at the floor center of the lab, then the whole equipment was set up and well-calibrated. | The lab is equipped with an OptiTrack motion capture system, which functions in the outside-in [39] tracking principle. 6 motion cameras are mounted around the experiment zone to take 2D aligned pictures of passive markers on objects, according to the position of retroreflective markers on 2D frames to calculate the real world 3D marker position. The Motive software transfers certain shapes formed by markers into a rigid body, the markers were installed asymmetrically so that the orientation can be identified as in Figure 2. The rigid body coordinate system is left-handed, the same as the world coordinate. The rigid body parameters will be stored in OptiTrack configurations to make them recognizable in every experiment setup. With a sufficient frame rate, the system can capture the in-situ position of the marker rigid bodies in sight. The rigid bodies’ information on positions and orientations is sampled at the rate of 100 Hz. The position information is then multi-casted in a local network with the Robot Operating System (ROS) Virtual-Reality Peripheral Network (VRPN) communication toolkit using the UDP protocol to guarantee communication speed. | In our research, the motion capture system will only track the position and orientation of objects instead of their motions. As a result, marker rigid bodies will take the place of marker skeletons, which are more common in motion capturing. See in Figure 2, rigid bodies are formed by 4 or more markers on the same plane, with a clear pre-set pivot to label its orientation. The position and orientation information will be captured at the sampling rate of 100 Hz for later trajectory reconstruction and distance extraction. | B
This section presents a simulation study of human inverse kinematics (IK) using the methods described above. To generate a trajectory, it is essential to ensure that both the starting and final points are within the workspace of the lower limb, as illustrated in Figure 4. According to [24], the average walking speed for adults without mobility issues is between $1$ and $1.5\ \text{m/s}$. For this application example, the velocity and acceleration at the beginning and end of the motion are $1.33\ \text{m/s}$ and $0\ \text{m/s}^{2}$, respectively, as shown in Table 3. | Figure 5: Joints angular tracking using CCD method. For this simulation, $\theta_{1}^{min}=0^{\circ}$, $\theta_{1}^{max}=120^{\circ}$, $\theta_{2}^{min}=0^{\circ}$, $\theta_{2}^{max}=117^{\circ}$, $\theta_{3}^{min}=51^{\circ}$ and $\theta_{3}^{max}=126^{\circ}$. | Figure 13: Joints angular tracking using MOOGA method. For this simulation, $\theta_{1}^{min}=16^{\circ}$, $\theta_{1}^{max}=68^{\circ}$, $\theta_{2}^{min}=20^{\circ}$, $\theta_{2}^{max}=105^{\circ}$, $\theta_{3}^{min}=84^{\circ}$ and $\theta_{3}^{max}=120^{\circ}$. | Acceleration ($m/s^{2}$) | $D_{CoM}=\left|\frac{\sum_{i=1}^{3}M_{i}^{seg}\,CoM_{i}}{\sum_{i=1}^{3}M_{i}^{seg}}\right|$ | C
Let $x_{k+1}$ be the smallest element (in the linear extension) such that $x_{k+1}\in R_{k+1}\cap X$ if there is such an element. | The analog of Algorithm 2 is the test sampling a random set of vertices and accepting the graph if the subgraph spanned by them is $K_{\chi}$-free. We need the same number of samples as in the case of posets. The following theorem is a straightforward consequence of Theorem 1.4. | The following proposition shows that Theorem 1.4 gives the right order of magnitude on the number of samples required for one-sided testing. | This gives the right order of magnitude of the number of samples required for the one-sided testing of $C_{h}$-free posets for every fixed $h$: Theorem 1.4 shows that using $\left\lceil\frac{4\log(h)+4c+1}{2\varepsilon}\right\rceil$ samples the error probability is at most $e^{-c}$, while Proposition 2.4 gives an example where the error is at least $e^{-c}$ when sampling at most $\frac{c}{2\varepsilon}$ elements. | For any fixed $h$, our bound gives the right order of magnitude (in $\varepsilon$) on the necessary number of samples for one-sided testing of $C_{h}$-free posets, see Proposition 2.4. | B
Conflict Avoidance: It applies a unique strategy for all data. For example, it prioritizes data from trusted sources over others. | Conflict Ignorance: The conflict is not handled but the different attribute values may be retained or the problem can be delegated to the user application. | Conflict Resolution: It considers all data and metadata before applying a decision to apply a specified strategy, such as taking the most frequent, most recent or a randomly selected value. | Moreover, they apply input quality assessment metrics to filter out the values below a threshold or keep the values with the highest quality assessment. Other techniques such as computing average, minimum, and maximum or taking the most frequent values are provided by their data integration framework. | Conflict Avoidance: It applies a unique strategy for all data. For example, it prioritizes data from trusted sources over others. | B |
Dual quaternion algebra has been highlighted in numerous works, including the dynamics modeling of a mobile manipulator [5], stabilization of rigid body motion, multiple body interactions [6], inverse kinematic study of 6-DOF robot arms, and tracking control [7, 8]. For instance, Valverde et al. [9] presented a serial manipulator dynamics model using the recursive Newton-Euler method based on dual quaternions. Similarly, Silva et al. [10] employed the recursive Newton-Euler method and Gauss’s Principle of Least Constraint, both based on dual quaternions, to describe the relationships between joint velocities, forces, and torque variables of mobile manipulators | In this paper, the dual quaternion-based theory is applied to the kinematics and dynamics study of the 7-DOF human lower limbs in 3D space. Subsequently, the artificial neural networks method is used to solve the inverse kinematics problem. The efficiency of the artificial neural networks method is verified using the jerk energy criteria. The rest of this paper is organized as follows: Section 2 provides a brief mathematical background on dual quaternions algebra. Section 3 elaborates on the forward kinematics of the human lower limb in 3D space using dual quaternions. Section 4 focuses on the application of the artificial neural network method to solve the inverse kinematics of the lower limb. In Section 5, the dynamical model of the lower limb using dual quaternions based on a recursive Newton-Euler method is developed. Finally, in Section 6, the simulation results are discussed. | The primary objective of this paper was to leverage dual quaternions algebra for describing the kinematics, encompassing position and orientation, as well as the dynamics modeling of an anthropomorphic leg in 3D-space, thereby circumventing the high computational costs associated with homogeneous transformation methods. To achieve this, artificial neural networks (ANN) were employed to solve the inverse kinematics (IK) problem while adhering to range of motion constraints, and the minimum energy criterion was applied to ensure realistic human posture. Additionally, the Newton-Euler recursive method based on dual quaternions was chosen for dynamics modeling to mitigate the complexities associated with geometric analyses. | In this section, the Forward Kinematics (FK) of the lower limbs, depicted in Figure 1, using dual quaternions is established. FK involves computing the positions and orientations of the end-effector in task space from the axes and angles of the joint rotations. The lower limb is decomposed into four segments: the pelvis, thigh, leg, and foot, connected by three joint groups. These include the hip, which rotates about three perpendicular axes; the knee, which moves solely about the z-axis; and the ankle, permitting movement in three planes. Therefore, the degrees of freedom (DOF) of the lower limbs total 7 [16]. Consequently, the position of the end-effector relative to the reference frame $\mathscr{R}_{3}$, denoted as $P_{E/3}$, can be expressed as: | This section focuses on the dynamic description of the lower limb shown in Figure 2 using the Dual Quaternion-based recursive Newton-Euler method. This method involves calculating the velocities and accelerations of the center of mass of each link, known as twists, based on the positions, velocities, and accelerations of the lower limb configuration. These calculations adhere to the Newton-Euler propagation law.
Subsequently, the wrenches, representing forces and moments acting on each link in 3D space, are derived starting from the wrenches applied to the end effector. | A |
Another question we still need to ask is what order should we use to approximate $f-h$? We will see that | If the objective $f$ is such that we can afford to access its gradients and Hessians from time to time (functions of the form (1) with $n<\infty$ being “reasonable”), then we can do better than the previous chapter. In this case, we can use a better approximation of the term $f(\bm{y})-h(\bm{y})$. From a theoretical point of view, we can treat the case when $f$ is only differentiable once, and thus, we can only use a first-order approximation of $f-h$; in this case, we will only be using the Hessian of the helper $h$ but only gradients of $f$. However, in our case, if we assume we have access to gradients, then we can also have access to the Hessians of $f$ as well (from time to time); for this reason, we consider a second-order approximation of the term $f-h$. If we follow the procedure that we described above, we find: | Combining the two approximations for $h$ and $f-h$ we get the following model of our objective $f$: | The general idea is the following: imagine that, besides the objective function $f$, we have access to a helper function $h$ that we think is similar in some sense (that we will define later) to $f$ and thus it should help to minimize it. | We also need to address the question of measuring the similarity in this case. Since we employ a second-order approximation of $f-h$, it seems natural to compare the function $f$ and its helpers $h_{1}$ and $h_{2}$ by using the difference between their third derivatives or, equivalently, the Hessian Lipschitz constant of their difference. Precisely, we make the following similarity assumption: | B
$\mathbf{f}^{X}=\sqrt{\beta_{\mathbf{f}^{X}}}\,\tilde{\mathbf{f}}^{X}$, $\mathbf{f}^{Y}=\sqrt{\beta_{\mathbf{f}^{Y}}}\,\tilde{\mathbf{f}}^{Y}$; $\tilde{\mathbf{f}}^{X},\tilde{\mathbf{f}}^{Y}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_{N})$. All terms of the form $\beta_{x}$ represent the pathloss factor in the $x$th link. | The base stations (BSs) of operators X and Y (referred to as BS-X and BS-Y, respectively) and the UEs are equipped with a single antenna, and all the channels in the systems undergo frequency-flat fading. (Extension to general cases with multiple antennas and frequency selective channels does not change the main message, and is left to future work.) An $N$-element IRS is deployed by operator X in order to enhance the quality of service (QoS) to the UEs being served by it. That is, operator X configures the IRS with the optimal phase configuration for a UE scheduled by BS-X in every time slot. | In practice, multiple network operators co-exist in a given geographical area, each operating in different frequency band. As a consequence, at a given point in time, multiple UEs are served by different operators in the system. In such a scenario, if an IRS is optimized to cater to the needs of one of the operators, it is not clear whether the IRS will boost or degrade the performance of the other operators in the system. In particular, since the IRS elements are passive, they will reflect the RF signals impinging on them in all frequency bands. So, it is important to understand how an IRS which is controlled by only one operator affects the performance of other operators (called as out-of-band operator in this paper). Although a very few works consider the scenario of the presence of an IRS in multi-band systems [4, 5], these works proceed along the lines of jointly optimizing the IRS phase configurations among all the operators. This approach requires inter operator co-ordination, which is not practical. Moreover, the solutions and analysis provided in these works are not scalable with number of operators (or frequency bands) in the system. More fundamentally, none of these works address the question of the out-of-band (OOB) performance even in the scenario of two operators operating in non-overlapping bands and the IRS is optimized for only one operator. In this paper, we address this question, and to the best of our knowledge, this is the first work which considers the effect of OOB performance due to the presence of an IRS under practical cellular network deployment scenarios. | As mentioned earlier, in this work, we consider a scenario where operator X deploys and controls an IRS in order to enhance the throughput of the users being served by it, and are interested in the effect of the IRS on an operator Y that is providing services in a different frequency band. Thus, in order to serve the $k$th UE, operator X configures the IRS with the rate-optimal phase angles [1, 2], [7, Lemma 1] | We consider a system with two network operators providing service in non-overlapping frequency bands. We analyze the OOB throughput performance in the presence of an IRS that is optimized to serve the users subscribed to an operator offering wireless services in a different frequency band. Specifically, | C
Or the defined effect $I(S)$ is just a mathematical game without clear meanings. | Furthermore, if a DNN learns meaningful concepts, then these concepts are supposed to exhibit certain discrimination power in the classification task. | Therefore, in this study, we examine the counter-intuitive conjecture that a DNN learns symbolic concepts from the following four perspectives. | To this end, we believe that if a well-trained DNN really encodes certain concepts, then the concepts are supposed to satisfy the following four requirements. | Furthermore, if a DNN encodes faithful symbolic concepts, then these concepts are supposed to exhibit certain discrimination power in the classification task. | C
∙ Besides, normal DNNs usually learn low-order interactive concepts faster than over-fitted DNNs. | Then, the Harsanyi dividend (or Harsanyi interaction) (Harsanyi 1963) is used to quantify the effect of the interaction between a set $S\subseteq N$ of input variables. | Interactive concepts vs. cognitive concepts and other interaction metrics. Although the Harsanyi interactive concept seems partially aligned with humans’ cognition to some extent (Cheng et al. 2021b), we do not think such interactive concepts exactly fit humans’ cognition. More crucially, the mathematical generalization power of a concept (defined in Equation (3)) does not depend on whether the concept fits human cognition. To this end, Ren et al. (2023) have proved that the Harsanyi interaction could represent primitives of inference logic of a DNN, which was already sufficient for our research. Please see Section 2 in supplemental materials for detailed comparisons between the Harsanyi interaction and other interaction metrics. | Although Ren et al. (2023) did not convince us that the above interaction really represented a concept that fits human cognition, they did provide mathematical supports for such interactions. | In this paper, we follow Ren et al. (2023) to take the Harsanyi interaction as a simplified definition of concepts or primitives encoded by a DNN. These interactions are proved to well mimic network outputs under different input variations, so we can roughly consider such concepts as primitives to analyze the DNN. Our analysis does not require the exact fitness between the concept and human cognition. | B
We formulated our activation function in the following order. First, we used the hyperbolic tangent function as a basic framework, and then multiplied by the identity function to show the behavior of the identity function in the positive integer region. Lastly, we composited the exponential function to the hyperbolic tangent function to make it converge to zero asymptotically in the negative integer region. | When we expanded the Mish [6] using a Taylor series, surprisingly, we happened to know that our activation function is related to the Mish. | The robust property of MoLU is to approach rapidly to the value of minimum of a loss function without losing stability. This is a truly useful characteristic when training long time-series data by using NeuralODEs (Neural Ordinary Differential Equations). To prove the performance of MoLU, we conducted experiment on NeuralODEs, MNIST, and CIFAR10. In NeuralODEs, the differentiable activation functions are mainly used, so we compared MoLU with GeLU, Mish, SiLU, ELU, Tanh, and in case of the classification, compared it with ReLU, Leaky ReLU and Tanh. We used $\alpha=2$, $\beta=2$. | We realized that our activation function is not only showing a good performance for the accuracy but also converging to zero rapidly when updating a loss function during a test on some mathematical model and neural networks. | We formulated our activation function in the following order. First, we used the hyperbolic tangent function as a basic framework, and then multiplied by the identity function to show the behavior of the identity function in the positive integer region. Lastly, we composited the exponential function to the hyperbolic tangent function to make it converge to zero asymptotically in the negative integer region. | A
$\langle\langle\langle 1\rangle,\langle 1\rangle\rangle,\langle\langle 3,2,1\rangle\rangle,\langle\langle 1\rangle,\langle 2,1\rangle\rangle\rangle$. | The set of functional digraphs over $n$ vertices up to isomorphism can be generated with delay $O(n^{2})$ and using linear space. | We have described the first polynomial-delay generation algorithm for the class of functional digraphs, both connected and arbitrary, which proves that these classes of graphs can be generated with an $O(n^{2})$ delay and linear space. A proof-of-concept implementation of the algorithms described in this paper, the funkdigen command-line tool, is also available [9]. | However, the class of functional digraphs does not seem to have been considered yet from the point of view of efficient generation algorithms. Here we first propose a $O(n^{2})$-delay, linear space algorithm for the generation of connected $n$-vertex functional digraphs (sequence A002861 on the OEIS [22]), based on an isomorphism code which avoids generating multiple isomorphic digraphs. This assumes the word RAM model with word size $O(\log n)$ [14]. | There is a $O(n^{2})$-delay and linear space algorithm generating all connected $n$-vertex functional digraphs. | B
$O(n^{-13/2})$ | $1.89\times 10^{-2}$ | $1.12\times 10^{-1}$ | $1.86\times 10^{-2}$ | $1.86\times 10^{-2}$ | A
Traditional approaches for processing 3-D signals, such as video data, typically employ 3-D CNNs that produce a single prediction per signal. However, these architectures are inherently complex, with a high number of parameters, and often require pre-training on large 3-D datasets to achieve satisfactory performance. Another common approach involves assigning the video-level label uniformly to each frame and then using CNN-RNN networks to train on these annotated frames. This approach assumes that the facial expression intensity is consistent across all frames, which may not be the case, as only a subset of frames might actually display the labeled intensity [24, 1, 23, 26, 31, 34, 39, 12]. | The extracted representations are then passed to the MRNN component, which consists of an RNN designed to capture temporal dependencies across the sequence of frames. To handle the varying lengths of input videos, a Mask layer is employed within the MRNN. This layer dynamically selects relevant RNN outputs based on the actual number of frames in the video, allowing the model to adapt to variable input lengths without compromising the integrity of the temporal information. The selected features are then passed through fully connected layers to produce the final intensity estimation for the entire video. | Traditional approaches for processing 3-D signals, such as video data, typically employ 3-D CNNs that produce a single prediction per signal. However, these architectures are inherently complex, with a high number of parameters, and often require pre-training on large 3-D datasets to achieve satisfactory performance. Another common approach involves assigning the video-level label uniformly to each frame and then using CNN-RNN networks to train on these annotated frames. This approach assumes that the facial expression intensity is consistent across all frames, which may not be the case, as only a subset of frames might actually display the labeled intensity [24, 1, 23, 26, 31, 34, 39, 12]. | Moreover, our approach addresses the challenge of variable-length input videos. Traditional methods often rely on ad-hoc strategies to manage varying numbers of frames, such as setting a fixed input length and either discarding excess frames (which risks losing critical information) or duplicating frames in shorter videos (which can bias the model towards repeated data). These strategies are not only suboptimal but also require empirical tuning for each specific dataset, limiting their generalizability and effectiveness. | Table 2 shows that our uni-modal non-ensemble learning MMA-MRNNet (that exploits only the visual information and does not employ any ensemble learning) outperforms all other methods by large margins (although some methods are multimodal ones or even ensembles). Let us also note that all baseline and state-of-the-art methods utilized the ad-hoc strategy of selecting fixed input length by removing or duplicating images within each video sequence. | C |
A conformity function is a mapping $\rho:\mathbb{R}^{d}\times\mathscr{Y}\times\Omega\to\mathbb{R}$ such that $\rho(x,y)=\rho(x,y,\,\boldsymbol{\cdot}\;)$ is $\mathscr{T}$-measurable for every $x\in\mathbb{R}^{d}$ and every $y\in\mathscr{Y}$. The sequence of conformity scores $\{S_{i}\}_{i\geq 1}$ associated with a conformity function $\rho$ is defined by $S_{i}(\omega)=\rho(X_{i}(\omega),Y_{i}(\omega),\omega)$. We say that a conformity function $\rho$ is regular with respect to a specific data sequence if there are no ties among the corresponding conformity scores $\{S_{i}\}_{i\geq 1}$ almost surely. | Conformity functions are agnostic to the choice of the specific models or algorithms used to construct $\hat{\mu}$, $\hat{\xi}_{p}$, and $\hat{\pi}$ in Example 1. The intuition is that the associated conformity scores measure the ability of the model to make accurate predictions on the calibration sample, whose information is not used in the model’s training process, and the assumed data sequence exchangeability transfers this assessment of the model’s predictive capacity from the calibration sample to the sequence of future observables. The following result is proved in the Appendix. | Note that the regularity of a specific conformity function $\rho$ is contextual, being inherently dependent on the distribution of the underlying data sequence. Technically, we can always avoid ties among the sequence of conformity scores almost surely by introducing a properly constructed ancillary tie-breaking sequence. | Under the data exchangeability assumption, the sequence of conformity scores $\{S_{i}\}_{i\geq 1}$ is exchangeable. | A conformity function is a mapping $\rho:\mathbb{R}^{d}\times\mathscr{Y}\times\Omega\to\mathbb{R}$ such that $\rho(x,y)=\rho(x,y,\,\boldsymbol{\cdot}\;)$ is $\mathscr{T}$-measurable for every $x\in\mathbb{R}^{d}$ and every $y\in\mathscr{Y}$.
The sequence of conformity scores $\{S_{i}\}_{i\geq 1}$ associated with a conformity function $\rho$ is defined by $S_{i}(\omega)=\rho(X_{i}(\omega),Y_{i}(\omega),\omega)$. We say that a conformity function $\rho$ is regular with respect to a specific data sequence if there are no ties among the corresponding conformity scores $\{S_{i}\}_{i\geq 1}$ almost surely. | B
In this section, we evaluate the proposed EVSTr model on two event-based recognition tasks, including object classification in Section IV-A and action recognition in IV-B. Object classification effectively evaluates the algorithm’s ability to extract spatiotemporal features from short-duration event streams, while action recognition on long-duration streams assesses the method’s capability to model long-range temporal dependencies. To provide a more practical and convincing model evaluation, we present a new event-based action recognition dataset recorded in challenging visual scenarios. Besides, a detailed ablation study is provided in Section IV-C to analyze the effectiveness of different designs. | We validate our method on four representative event-based object classification datasets: N-Caltech101 [42], CIFAR10-DVS [43], N-Cars [44] and ASL-DVS [16]. N-Caltech101 records the RGB images using a moving event camera. CIFAR10-DVS records the moving RGB images displayed on a monitor via a still event camera. Instead, N-Cars and ASL-DVS uses event cameras to record real-world objects. We train our model on each training set separately and evaluate its performance on the testing sets. For N-Caltech101, CIFAR10-DVS, and ASL-DVS without official splitting, we follow the settings in [16, 11] to randomly select 20% of data for testing, and the rest is used for training. | We evaluate the proposed method on five action recognition datasets, including UCF101-DVS, HMDB51-DVS, DvsGesture, DailyAction, and newly presented NeuroHAR (only using the event modality). For UCF101-DVS and HMDB51-DVS, substreams of 2000 ms are randomly cut for training and evaluation. We adopt the pre-processing method in [15, 12] to cut 800 ms clips of DvsGesture as input. For DailyAction and NeuroHAR, the entire event stream is fed into our model. We train EVSTr separately on each training set and evaluate it on the testing sets. For UCF101-DVS, HMDB51-DVS, and DailyAction without a training/testing splitting, we follow the settings in [16] to randomly select 20% of data for testing, and the remaining samples are used for training. | For N-Caltech101, N-Cars, and ASL-DVS, the entire event stream is converted into the event voxel set representation (Section III-A) as input of our model. For CIFAR10-DVS, substreams of 200 ms are randomly cut for representation during training and evaluation. We fix the compensation coefficient $T$ as 4 for all datasets. Considering the different spatiotemporal sizes of the datasets, we set the size $(H_{\rm v},W_{\rm v},T_{\rm v})$ of event voxels as follows: (5, 5, 1) for N-Cars; (10, 10, 1) for other three datasets. We set the number $N_{\rm v}$ of input voxels as: 512 for CIFAR10-DVS, N-Cars, and ASL-DVS; 1024 for N-Caltech101. Moreover, the value $D_{\rm f}$ of MLP for voxel feature encoding is fixed as 32 in all experiments. | Compared to dense frame-based methods using pre-trained models, our method still obtains a competitive performance on ASL-DVS, N-Cars, and CIFAR10-DVS without utilizing prior knowledge from the image domain.
When training them from scratch for fair competition, EVSTr achieves higher accuracy than them on all datasets. Besides, our model outperforms the sparse frame-based method AsyNet [6] while maintaining lower model complexity. A detailed comparison of model complexity is presented in the following part. | A
In haptic exploration methods, a common but unrealistic assumption is that objects are fixed to the surface (also in [1]). Objects naturally move when they are touched and their pose needs to be re-estimated. Many existing pose estimation methods require prior knowledge of the objects at the instance level [37, 38] or category level [39, 40]. We seek methods that work with unknown arbitrary objects. Having segmented point clouds of each object at hand, we chose a simple and computationally cheap (no GPU) solution using Iterative Closest Point (ICP) [41]. Alternative solutions for unknown objects are [42, 43]. | The algorithm is described in detail in Section III-H. The following sections detail individual modules required by the pipeline. | We will first describe the module for the shape creation itself. In [1] the IGR network was used as a standalone library. To perform more efficiently and to be able to handle more objects at once, we modified it to be more compatible with the whole ecosystem (under Robot Operating System (ROS)). The module contains the input point clouds, latent vectors, and other parameters for each object in the scene, allowing simple switching between objects without excessive overhead. The next object to be completed is selected through messages sent from the main script. If a new request is received and reconstruction is running, the new objects are placed in a queue. The module runs in the background, which allowed a considerable speed-up of the whole process, as now reconstructions are processed while the robot is moving. The basic operation is shown in Alg. 1. First, a new shape is selected from a queue (if it is not empty, otherwise the module waits for a new request). Then the latent vector $\mathbf{z}$ and the input point clouds $\mathcal{X}$ for the given shape are loaded. If the shape is new, the first latent code is created randomly with a normal distribution. Otherwise, the last known vector for the given object is used. The current $\mathbf{z}$ is optimized with the loss from Eq. 3. Finally, the shape $O$ is created, together with the uncertainty computed with Eq. 8. | The main Alg. 2 starts with capturing the initial visual information (box (1) in Fig. 1, line 5 in Alg. 2). An initial transformation $\mathbf{R}_{0}$ of the object in the base frame of the robot is obtained. The information is then segmented and a point cloud is created for each object in the scene (box (2), line 6). The segmentation itself is described in III-F. | We present the algorithm of our method in Alg. 2 and the same is depicted in Fig. 1. The algorithm is high-level pseudocode, with a module for shape completion Alg. 1 described in more detail. | D
This approach based on ideals allows us to recover well-known topologies on $A^{B}$, such as the topology of pointwise convergence (referred to as the local topology in this work) and the uniform topology (refer to [17], Section 19). | Infinitary $\omega$-clones have been mainly studied with respect to both local topology and global topology. However, to extend the previous results to $\omega$-clones that are not necessarily infinitary, we require a new concept of polymorphism. | The following theorem provides additional informations on locally closed infinitary $\omega$-clones. Recall from Notation 6.13 the definition of $Pol^{\omega}$. | Furthermore, it provides a convenient framework for defining new topologies, particularly for studying (infinitary) $\omega$-clones. | Furthermore, infinitary $\omega$-clones naturally extend the concept of clones, as every clone can be encoded into an appropriate infinitary $\omega$-clone. | C
An illustrating example is the so-called synthesis problem in the field of system identification, where (under special conditions) the Minimum $s$-$t$ Cut problem can be used to determine an optimal placement of input and output signals in a physical system (modeled as a directed graph) to gather information about its behavior [SCV22]. | We now briefly motivate why finding diverse minimum $s$-$t$ cuts in a graph can be of interest. In general, to solve a real-world problem, one typically formulates the problem as an instance of a computational problem and proceeds to find a solution with the help of an optimization algorithm. However, this is not always an easy task, and the abstraction to a mathematical formulation is usually just a simplification. From a theoretical perspective, an optimal solution to the simplified problem is as good as any other optimal solution, but due to the loss of information during the abstraction process, not every such solution is guaranteed to be adequate for practical usage. | An optimal placement obtained from the abstract model, however, is not always practically feasible due to omitted physical constraints of the system that would otherwise render the model unmanageable [SCV21]. | An illustrating example is the so-called synthesis problem in the field of system identification, where (under special conditions) the Minimum $s$-$t$ Cut problem can be used to determine an optimal placement of input and output signals in a physical system (modeled as a directed graph) to gather information about its behavior [SCV22]. | One way of dealing with this issue is to present all optimal solutions of the simplified model and let a user choose between them based on external factors ignored by the mathematical model. Such an approach is useful when the number of optimal solutions is small, but in most cases (as in the Minimum $s$-$t$ Cut problem) the number of optimal solutions can be exponential in the input size, rendering the approach infeasible. Another approach is to present only a small number $k$ of optimal solutions, but one should be careful not to output solutions that are very similar to each other, as a solution resembling a practically infeasible solution is likely to be practically infeasible as well. Thus, we would like to somehow obtain a small list of $k$ optimal, yet sufficiently “diverse” solutions from which a user can make a choice a posteriori. | B
We define computable probability measures, $m=\mathcal{M}/\mathcal{M}(X)$ and $r=\mathcal{R}/\mathcal{R}(X)$. Then | $\mathcal{I}(\mathcal{P}:\mathcal{Q})$ | $\mathcal{I}(\mathcal{P}:\mathcal{Q})<^{+}$ | $2^{\mathcal{I}(\mathcal{P}:\mathcal{Q})}$ | $\mathcal{I}(\mathcal{P}:\mathcal{Q})$ | A
On convolutional neural networks, this method allows Mixture Normalization to achieve better results than batch normalization in terms of convergence and accuracy in supervised learning tasks. | In this perspective, we propose a novel normalization technique called context normalization (CN). In fact, assuming that the data are well modeled by a mixture of several components, each sample in the mini-batch is normalized using the mean and variance of the associated component. Indeed, the capability of GMM to approximate any continuous distribution with arbitrary precision has been demonstrated by [7]. Building upon this foundation, our paper follows a similar track but introduces a novel method. In particular, we define a context that can come from various sources that describe the structure of the dataset. A context can be conceptualized as a coherent cluster of samples that share common characteristics and can be effectively grouped together. Each context can be viewed as a component of the Gaussian mixture with its own probability density function. By normalizing samples from the same context with the parameters learned during backpropagation, CN allows an estimation of the mean and variance of each mixture component thus improving the discrimination power of the data representation according to the target task. | CN transform is a differentiable operation in deep neural networks that normalizes input data. By applying CN, the model can continuously learn from input distributions and adapt its representations to the target task, leading to improved performance. This normalization helps mitigate the influence of variations in input distributions, allowing the model to focus on relevant patterns and features. The differentiability of CN enables efficient gradient flow during training, facilitating parameter updates and learning from the normalized data while preserving differentiation through the normalization process. Overall, CN plays a vital role in enhancing model performance by promoting effective learning and adaptability through data normalization. It demonstrates higher flexibility compared to MN due to its ability to establish consistent data grouping based on provided contexts, without the need for additional algorithms. This is advantageous over MN since the Expectation-Maximization (EM) algorithm employed in MN can exhibit slower convergence. In the specific case of classifying dog images, where data scarcity is a challenge, the method addresses this issue by partitioning the dog class into subclasses. This approach enables the acquisition of specific features applicable to all dogs, facilitating the normalization of images within the dog superclass and creating a more coherent and easily learnable feature space. Importantly, the context identifier used for learning the normalization parameters is unrelated to the images themselves. Instead, it can be viewed as noise, contributing to the regularization of the deep neural network during training, similar to techniques like dropout, thereby enhancing the generalization performance of the model [11]. | We have proposed a novel approach called ”context normalization” (CN) that enhances deep neural network training in terms of training stability, fast convergence, higher learning rate, and viable activation functions. 
Similar to the conventional mixture normalization (MN) method, our approach is driven by the hypothesis that any continuous function can be approximated in some sense by a weighted sum of Gaussian distributions with finite mean vectors and covariance matrices. In other words, our methodology assumes that the data distribution is a mixture of Gaussian models. However, unlike the mixture normalization technique that invokes the expectation maximization (EM) algorithms to estimate the Gaussian components parameters, our proposed methodology relies on the notion of concept that represents a cluster of related data. In fact, a supervised deep neural network is built and trained in order to learn the Gaussian components parameters. Once these optimal values are determined after convergence, they are utilized during the CN procedure performed on a deep neural network activation layer. CN alleviates the slow estimation of Gaussian component parameters inherent to EM in the scenario of large datasets. Furthermore, unlike MN, CN provides non linear decision boundaries between context which reflects more reality. Our experimental results demonstrate the superiority of context normalization over batch normalization and mixture normalization, showcasing enhanced convergence and generalization performance. The proposed method, when applied specifically to images, introduces CN-Channels and CN-Patches for training, and CN and CN+ for inference. With its flexibility to adapt various representations and tasks, context normalization proves to be a valuable tool in some application such as image classification. | Based on the Mixture Normalization (MN) hypothesis proposed by [6] (ref. to Figure 1), our Context Normalization (CN) approach operates under a similar assumption that data can be effectively represented by a mixture of multiple components, as opposed to batch normalization (BN) [4]). In the Context Normalization (CN) approach, a fundamental concept is introduced, namely, the notion of context, which represents a cluster of samples sharing common characteristics that can be efficiently grouped together. Unlike the Expectation-Maximization (EM) algorithm [10] typically employed for parameters estimation in each component, CN utilizes a deep neural network to learn these parameters through context-based normalization. | D |
OOD detection approaches are designed to address this problem, which aim to detect and reject these OOD samples while guaranteeing the classification of in-distribution data (Hendrycks and Gimpel, 2017). | The pixel-level foreground and background features learned in the dense prediction network cannot be applied directly to the image classification task. We show below that the $(K+1)$-class dense prediction network can be transformed to a $(K+1)$-class image classification network in a lossless fashion: the dense prediction and the classification networks share the same weight parameters, and the classification network can be applied to image classification without re-training. | There are generally two groups of OOD detection approaches. One of them are post-hoc approaches that work with a trained classification network to derive OOD scores without re-training or fine-tuning of the network, | All methods are based on ID training data without using any external outlier data. † indicates that the results are taken from the original paper, and other methods are reproduced using the same network architecture. Four post-hoc foreground OOD detection methods are respectively plugged into our method ‘X’-DFB, where improved results are highlighted in red and they are in blue otherwise. The best result per dataset is boldfaced. | There are generally two types of post-hoc OOD detection approaches, including raw logit-based and softmax probability-based methods. Our background-based OOD score is based on an unbounded logit value, which can dominant the overall OOD score when combining with the foreground-based OOD score using the softmax output (its value is within $[0,1]$). To avoid this situation, we take a different approach to combine the foreground and background-based OOD scores, depending on the type of the foreground-based OOD detector used: | B
We demonstrate that our DM improves existing state-of-the-art LLIE techniques on popular low-light datasets including challenging unpaired test sets. | In this paper, we present a framework for post-processing images which have undergone low-light image enhancement. The enhancement of low-light images often reveals a variety of degradations which are hidden in the dark, and thus a need for post-processing is introduced. Furthermore, each low-light enhancement technique can possibly introduce a different form of degradation into its result. We propose using a conditional diffusion model in order to model the distribution between under-exposed and normally-exposed images. Further, we introduce a method of applying the diffusion model as a post-processing technique. Our approach uses the diffusion model to estimate the amount of noise present in an enhanced image in one pass through the model, which can simply be subtracted from the enhanced image to further enhance the image. Moreover, we demonstrate that our approach outperforms competing post-processing denoisers, and we demonstrate its versatility on a variety of low-light datasets with different state-of-the-art low-light image enhancement backbones. In contrast to existing denoisers, we find that our approach is able to improve perceptual quality, while removing noise and other distortions. In future work, our approach could potentially be applied to other image restoration domains. | The experimental results in Table III show that LPDM significantly outperforms ULPDM across all metrics. Therefore, we conclude that conditioning is necessary in order for the LPDM to detect the wide variety of artifacts that can be present in $\bm{\hat{x}}_{0}^{\eta}$. We provide visual results in Fig. 7 which verify our conclusion: ULPDM is able to remove noise, however results are oversmoothed and thus detail is lost due to lack of conditioning. | Notably, NAFNet only performs better than LPDM for SSIM on the noise introduced by the LIME method due to the type of noise being similar to most denoising datasets. In contrast, our method models the conditional distribution between low-light and normally-exposed images, and thus the LPDM can handle a variety of different artifacts and color distortions besides typical noise. An example of a distortion which differs from typical Gaussian noise is the distortion introduced by KinD++. Row three of Fig. 5 displays how our LPDM increases the sharpness of $\bm{\hat{x}}_{0}^{\textnormal{KinD++}}$ where the other denoisers yield oversmoothed results. We emphasize this point because different LLIE methods introduce a panoply of different distortions. | In addition to simple denoising, we demonstrate that our method is able to cope with a variety of different artifacts and color distortions, yielding superior results to existing denoisers for LLIE. | D
Our study extends previous research on individual differences in spatial navigation to navigation in knowledge space. An intuitive next research phase could involve constructing mathematical models that integrate personal traits to elucidate participants’ navigation behavior. Additionally, exploring whether and how navigation experiences can be enhanced for individuals with specific characteristics in future experiments is a viable avenue for investigation. | In the game sessions, players are given two Wikipedia pages as the source and the target in each game. To reduce the disparities in prior knowledge among the participants, the source and target pages are chosen to be similarly distanced (2 or 3 steps away on the Wikipedia network) pages about renowned individuals from various domains such as artists, directors, scientists, and politicians, spanning different historical periods and encompassing both genders. The players start from the source page and navigate to the target page by clicking on the hyperlinks to other Wikipedia articles on the page. To win each game, they should reach the target page in at most 7 steps (Least-click game) or within 150 seconds (Speed-race game). Each participant plays nine rounds of games grouped into three sessions with a one-minute break between the sessions. After the game sessions, participants first finished a 50-question Big Five personality test (https://openpsychometrics.org/tests/IPIP-BFFM/) measuring their five personality traits: openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism. To control other factors that may affect navigation performance, we then asked six groups of questions about their i) employment status, ii) education background, iii) spatial navigation habit, and their prior experience with iv) the Wikipedia navigation game, v) the Wikipedia website and vi) computer games. Lastly, we asked participants demographic questions about their age, gender, ethnicity, political position, and language skills. See the Supplementary Material for a complete list of the questions in the survey. One of the games with the source page "Alexander the Great" and target page "Tim Burton" turned out to be much more difficult than the other games ($>3\sigma$), and is therefore counted as an outlier and excluded from our analysis. After the exclusion, the eight rounds of navigation tasks reached a Cronbach’s alpha score of 0.76, indicating fair internal reliability of the navigation task. | Encoding the participants’ answers to the questions in the survey (see encoding details in the Supplementary Material), we end up with 18 control variables characterizing the participants by the six groups of questions specified above, 5 control variables indicating the game, game type (Speed-race or Least-clicks), round number of the game and participants’ familiarity of the source and target Wikipedia articles of the game played by each participant. In addition, we adopted 11 independent variables describing the participants’ big five personality traits, age, gender, ethnic background, political position, and foreign/native language skills. To reduce the strong correlation and anti-correlation present among the control variables, we conducted a principal components analysis (PCA) [43] in each question group and summarized 80% of the variance by a reduced set of variables (principal components).
The final list of the 13 control variables and their respective loadings from the original variables are shown in Table 2. Descriptive statistics of the participants’ characteristics can be found in Supplementary Table S1 in the Supplementary Material. As shown, male participants in our experiment are, on average, younger and less liberal, with a more varied ethnic background. They are also more likely to speak a foreign language and have prior experience with the Wikipedia navigation game. Female participants prefer to play the navigation game without time constraints (Least-clicks game), whereas males tend to race for speed (Speed-race game). Regarding the Big Five personality score, we did not observe big differences between male and female participants (Maximum t value = 1.75). | To investigate the impact of individual characteristics on navigation success and creativity, we employed four regression models. For navigation success, we conducted separate logistic regression analyses for games with time and distance constraints. The dependent variable was the binary measure $s_{n}^{i}$ of successful or unsuccessful navigation for the $n$th participant in the $i$th game. Creativity in navigation was assessed using linear regression for each game type, with the standardized uniqueness score as the dependent variable. Ethnicity was represented as two binary variables indicating Asian/African American identity and political orientation was captured as a binary variable indicating liberal stance, because being Asian/African American and liberal are significant (p < 0.01) predictors of navigation performance while other categories of ethnicity and political positions are not. As control variables, we included a dummy variable representing the index of the eight games to account for differing difficulties, and a numeric variable indicating the order of the game to control for attention changes during the experiment. The final regression results are presented in Table 1 (the dummy variables indicating the game index were not shown in the table for visualization simplicity. For the full regression results, see Supplementary Table S4). The correlation among predictors significantly associated (p < 0.01) with navigation performance is illustrated in Supplementary Figure 1, and the variance inflation factors (VIF) of all the independent variables are shown in Supplementary Table S5. The low VIF values (Max=2.38) indicate that the collinearity issue of our model is negligible. To test if the main effects of navigation performance in our models are still valid when interactions among the independent variables are considered, we conducted two extra logistic regressions where the interactions are included (see Supplementary Table S4 for details on how the interaction terms were selected). The regression results and the VIFs of the independent variables for all the logistic regression models are shown in Supplementary Tables S4-5. As demonstrated, the significant predictors of navigation performance maintain significance in both Speed-race and Least-clicks games after the inclusion of interaction terms (except Wikipedia1 for the Least-clicks games, which remains significant at p < 0.05 after introducing the interaction terms).
While certain variables, like employment status, computer games proficiency, and interaction terms, have achieved significance, their influence on the main effects observed is minimal. Therefore our primary focus in this study centers on the main effects of personal characteristics. To assess the different impact of each predictor on navigation performance, we also conducted a series of logistic regressions for Speed-race and Least-clicks games respectively where we added predictors one by one and presented the regression outcomes in Supplementary Table S2-3. | We conducted an online experiment where we hired 445 participants (397 participants after removing participants who did not finish the experiment or did not pass the attention check, and dropping data that had recording errors) from the United States on the online crowdsourcing platform Prolific (https://www.prolific.co/) to play nine rounds of the Wikipedia navigation game and fill in a survey on the survey platform Qualtrics (https://www.qualtrics.com/uk/). At the end of the experiment, each participant received a fixed rate base payment of 5 pounds and a bonus payment of 0.5 pounds for each game they won. To get a balanced population, we applied the following prescreening conditions: i) participants are from the United States, ii) an equal number of female and male participants, iii) participants with White, Asian, Hispanic, and African ethnicity consist ∼50%, ∼17%, ∼17% and ∼17% of the sample respectively. | D
The fast simulation options do not modify the traditional data processing flow described in Figure 1 (top), but rather allow to speed up the simulation phase up to a factor 20 with respect to the detailed simulation. | A more radical approach is the one followed by the ultra-fast simulation strategies which aim to parameterize directly the high-level response of the LHCb detector [11, 14]. | As mentioned in the previous Section, the validation of the ultra-fast philosophy of Lamarr is based on the comparison between the distributions obtained from models trained on detailed simulation and the ones resulting from standard simulation strategies. | The high-level response of the RICH and MUON systems are reproduced using the particles kinematic information provided by the Lamarr tracking modules and a description of the detector occupancy, for example based on the total number of tracks traversing the detector. | These options offer cheaper alternative solutions to reproduce the low-level response of the LHCb detector and are typically named fast simulation strategies. | A |
Reflecting on the contrastive alignment loss employed during pretraining, it becomes evident that the loss encompasses both structure-to-sequence and sequence-to-structure alignment calculations. The intermediate state score computations offer a direct means to evaluate the multi-modality alignment level. | Internal evaluation across test sets. Group 1 showcases the contact map predictions, measured by P@* scores. Group 2 focuses on retrieval alignment evaluations, quantified by alignment accuracy and KL distance. The acc1 and acc2 metrics denote the accuracy of structure-to-sequence and sequence-to-structure alignment. | Group 1 of Table 3 showcases the contact map prediction scores, evaluated across the CATH, trRosetta, Ts50/Ts500, and CASP14 test sets. Both the residue-level and protein-level pretrained models demonstrate high P@L accuracy in predicting contact maps across all datasets, indicating that the pretrained structure module has acquired rich structural representations. Notably, the residue-level evaluation exhibits superior performance within Group 1, likely attributable to its finer granularity. | Table 2: Comparison among our protein design models (#2) and baselines (#1). The best results are bolded, followed by underlined. Designp: Protein-level pretrained model; Designr: Residue-level pretrained model. | The retrieval alignment evaluation on the CATH, trRosetta, Ts50/Ts500, and CASP14 test sets is presented in Group 2 of Table 3. Additionally, we provide residue-level and protein-level results for comprehensive analysis. It’s noteworthy that the protein-level pretrained model exhibits a higher ease in aligning sequences and structures, evident through higher accuracy and lower KL distance, which aligns well with our intuition. | D |
We evaluate the effectiveness by integrating LiftNet with four strong KGE methods and running experiments on three knowledge graph datasets of different sizes. The results show that by integrating with LiftNet, conventional KGE methods only require 16-dimensional entity representations to achieve link prediction accuracy comparable to original models of 512-dimensional, saving 68.4% to 96.9% model parameters. | We choose a set of strong conventional KGE models to show the effectiveness of the proposed LiftNet method. That includes TransE, TransH, DistMult, and ComplEx. TransE and TransH are translational models that adopt distance measurements for related entities and their relations. DistMult and ComplEx aim at semantic matching and adopt tensor decomposition in real and complex spaces, respectively. Their scoring functions are summarized in Table IV. | KGE methods learn vector representations for knowledge graphs, and we roughly categorize them into three types. First, distance-based methods describe a fact with mathematical operations, e.g., TransE defines a relation as the translation. To better model N-N relations, TransH [14] and STransE [15] project entities to relation-aware subspace with hyperplanes and matrices, respectively. Operations in the complex space [16] or the polar coordinate system [17] are also introduced to improve flexibility. | We refrain applying LiftNet to relation to accommodate conventional KGE models that design relation as other type of operations, e.g., translation on hyperplanes [14]. | Third, deep learning methods adopt deep neural networks to capture the complex relationships of entities and relations. ConvE [22], and CapsE [23] learn the complex interactions between entities and relations through convolutional layers and capsule layers, respectively. CompGCN [24] leverages GCN layers with entity-relation composition operations to capture interactions among entities and relations. | B
Unfortunately, to the best of our knowledge, our problem cannot be formulated as a linear programming one. This represents the biggest drawback of using linear programming for entanglement routing: the amount of detail one can add becomes restricted by the need to formulate the problem as a linear optimization. Nevertheless, the proposed approach allows for the addition of as much detail as needed, provided that monotonic and isotonic routing metrics can still be defined. An interesting way to merge these two directions would be to reformulate this work as a non-linear programming problem. | This paper focused on multipartite entanglement distribution for a quantum network connected through links that exhibit a trade-off between entanglement generation rate and fidelity. This is the case with hash-based quantum purification protocols [11] and with photonic models [12]. Two entanglement distribution models were considered: one where only one ebit is sent at each time epoch, and a second so-called flow model, where a large number of ebits are distributed simultaneously. The paper proposed using fidelity curves as a routing metric in both scenarios in combination with a multi-objective optimization algorithm, which finds the best path (or best star) connecting two (or three) nodes in close to linear time. The proposed method can be readily adapted to address routing challenges in various quantum network models, including those incorporating purification protocols between adjacent nodes. Nevertheless, how to deal with multi-path routing with non-deterministic swapping is still an open problem. In conclusion, this work paves the way for entanglement distribution in networks with complex link models, incorporating highly efficient purification protocols, and enabling optimization of quantum routing in more realistic and sophisticated network scenarios. | In its present form, the proposed technique cannot be used for entanglement distribution flow models considering non-deterministic swapping protocols because in that case the distribution rate will depend on the swapping order [7], making the problem considerably harder. Nevertheless, the proposed approach remains an extremely versatile tool that can be used in combination with relatively complex quantum network models. | Ghaderibaneh et al. [7] focused on a bipartite entanglement distribution model with a non-deterministic swap. As a consequence of that, the order in which each of the swapping operations is applied impacts the rate of entanglement distribution. They proposed a polynomial-time algorithm to find both the optimal path and the optimal swapping order. | Pirandola et al.[4] looked at bipartite entanglement networks based on the theoretical upper bounds for the channel capacity. In a regime where entanglement distribution is close to its theoretical upper bound, that work showed that in such a regime Dijkstra’s algorithm can be used to find the path that maximizes the entanglement distribution rate between nodes, and the max-flow min-cut theorem can be used to find the maximum rate between two nodes using multiple-path entanglement distribution. Both problems can be solved in polynomial time, and this approach was later generalized to include the multipartite entanglement distribution of GHZ-states[9]. | B |
The widely used design principle is to minimize the BLER under the SC decoding. The Bhattacharyya parameter [1] is the first construction method to precisely calculate the mutual information of each synthetic channel in the binary erasure channel (BEC). | For the binary-input additive white Gaussian noise (BI-AWGN) channel, the Gaussian approximation (GA) algorithm [7, 8] approximates the probability distribution of the LLR as Gaussian distribution and gives reliability evaluation with limited complexity. | For fading channels, the polar spectrum [9, 10] is proposed to derive the upper bound of the error probability of the synthetic channels to construct polar codes. | For other B-DMCs, the density evolution (DE) algorithm, initially proposed in [5] and improved in [6], tracks the probability distribution of the logarithmic likelihood ratio (LLR) of each synthetic channel and provides theoretical guarantee on the estimation accuracy with a high computational cost. | We introduce a new concept on the synthetic channels of polar codes, named partial MWD, which is used to evaluate the influence of each synthetic channel on the MWD when the information bit is transmitted in the synthetic channel. | C |
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-23-1-0556. | We first give some basic definitions that will enable us to state our main results on the value of the game $G(Q,O,p)$. | The authors would also like to acknowledge the Lorentz Center at Leiden University, since some of the results were obtained at a Workshop on Search Games organized by the Lorentz Center. | Other investigations considered search at nodes of a lattice (Zoroa et al., 2013) and costs for searching at nodes (Baston and Kikuta, 2015). The requirement to bring the target back to the root after capture (find-and-fetch) was considered by Alpern (2011). Angelopoulos (2020) considered the linear search problem in the setting where the Searcher has a “hint” about the location of the target. For general discussions of search games, see Alpern and Gal (2003), Garnaev (2000) and Hohzaki (2016). | We end this section by showing that the value of the game is non-increasing in $p$. This is intuitively obvious, since the higher $p$ is, the more reliable the signal is, so one would expect the search time to go down. | B
100 negative and 100 positive patches for the training set. We use the official validation and testing splits of 32,768 cases each. All images are upsampled to $512\times 512$ px. | Table 1: FID (↓) and MFID (↓) scores for embeddings generated with a varying number of sampling steps, CFG scale, embedding size, and a number of training cases. All settings are varied against 100 steps, CFG scale 2, embedding size 64, and 100 training cases. | For the image with all three diseases, we gave each embedding a strength of 0.5 and found that increasing the CFG scale to 3 works better. | We experiment with the number of sampling steps, the CFG scale, the number of images used to train embeddings, and the embedding vector size. | \tableref{tab:inference_settings} shows the FID and MFID scores after varying the number of sampling steps, CFG scale, embedding size, and the number of training cases relative to our final configuration used in the remainder of the paper: embedding size of 64 vectors per token, 100 cases per class, 100 sampling steps and a CFG scale of 2. | D
Our architecture advances scene reconstruction by providing an intuitive interface for layout manipulation. This capability is crucial for the reconfiguration of scene elements into novel scenes, as depicted in Fig. 3. Here, the input panel allows for adjustments in the attributes of bounding boxes, such as modifying the position and scale of the ’apple’ bounding box prior to composition. The refinement process further involves sampling ray-box intervals from the global frame, leading to transformed coordinates with the corresponding ray samples that are then incorporated into the pipeline, as demonstrated in Fig. 5. | Our architecture advances scene reconstruction by providing an intuitive interface for layout manipulation. This capability is crucial for the reconfiguration of scene elements into novel scenes, as depicted in Fig. 3. Here, the input panel allows for adjustments in the attributes of bounding boxes, such as modifying the position and scale of the ’apple’ bounding box prior to composition. The refinement process further involves sampling ray-box intervals from the global frame, leading to transformed coordinates with the corresponding ray samples that are then incorporated into the pipeline, as demonstrated in Fig. 5. | the potential degradation in specific elements, such as the ’wine’, which worsens as training progresses, as evidenced by the local frames’ comparison in the upper left corner. Concurrently, the global rendering depicts the ’wine’ as nearly imperceptible. This deterioration hints at the possibility that continued optimization may inadvertently diminish the representation of certain objects. | Each bounding box represents a NeRF, providing the flexibility to move, scale, or replace elements as needed. CompoNeRF’s capabilities also extend to textual edits, exemplified by the transformation of ’wine’ into ’juice’. | CompoNeRF is designed to composite multiple NeRFs to reconstruct scenes featuring multiple objects, utilizing guidance from both bounding boxes and textual prompts. Within our framework, depicted in Fig. 3, the Axis-Aligned Bounding Box (AABB) ray intersection test algorithm is applied to ascertain intersections across each box in the global frame. Subsequently, we sample points $\boldsymbol{x}_{g}$ within the intervals of the ray-box and project them to $\boldsymbol{x}_{l}$ to deduce the corresponding color $\boldsymbol{C}_{l}$ and density $\boldsymbol{\sigma}_{l}$ within individual NeRF models. | C
Note that our formulation of minimax classification error explicitly does not take into account class probabilities for the clusters. Equation (2) depends only on the class-conditional probabilities. | Sample the number $n_{j}$ of data points per cluster, based on the extent of class imbalance specified by the archetype. | Our reasoning is that the underlying reality of each cluster depends on its class-conditional probability distribution, not the class probability. | Our data generator repliclust is based on data set archetypes. A data set archetype is a high-level description of the overall geometry of a data set with clusters. For example, the class of all data sets with “three oblong and slightly overlapping clusters in two dimensions with some class imbalance” is a data set archetype. | Note that our formulation of minimax classification error explicitly does not take into account class probabilities for the clusters. Equation (2) depends only on the class-conditional probabilities. | B
In order to demonstrate the ability of our model to select event candidates, we analyze the results of two instances selected from the test set. For comparison, we select COFFEE without ranking and TANL, given its high performance. As shown in Table 3, our proposed model successfully extracts the missing events not detected by the baselines. The re-ranking mechanism enables the model to select more accurate candidates. | In order to demonstrate the ability of our model to select event candidates, we analyze the results of two instances selected from the test set. For comparison, we select COFFEE without ranking and TANL, given its high performance. As shown in Table 3, our proposed model successfully extracts the missing events not detected by the baselines. The re-ranking mechanism enables the model to select more accurate candidates. | Table 3: Event extraction examples from the test set using COFFEE, COFFEE without ranking and TANL+COFFEE. The triggers and arguments missed by the baselines but captured by COFFEE are highlighted. It is evident that COFFEE is generally more effective in detecting the events. | In particular, only COFFEE successfully predicts all the events within the context. In Example 1, both TANL and COFFEE without ranking fail to extract E1, triggered by ‘pay’, suggesting that the baselines may have difficulty identifying complex event triggers. In this case, there is not a specific amount of money to be paid, but a mention of cost. In Example 2, TANL fails to extract E2, which is triggered by ‘becoming’, and COFFEE without ranking fails to extract E1, highlighting the inability of the baselines to identify events and their corresponding arguments consistently. In contrast, our COFFEE successfully identifies the events and extracts the target arguments, demonstrating its superior performance. | Comparing COFFEE with and without ranking, we can conclude that re-ranking in the selector is crucial. In both examples, COFFEE fails to detect all events without re-ranking. Even though both candidates are the correct targets, the beam scores differ more than expected, which leads to incorrect ranking. The re-ranking can increase the probability of the second candidate and thus allowing it to be selected under the chosen threshold. | C |
For other examples of spectral algorithms (e.g., iterated Tikhonov, gradient methods, Landweber iteration, etc.), we refer to Gerfo et al. (2008). | Suppose that the eigenvalue decay rate (EDR) of $\mathcal{H}$ is $\beta>1$, i.e., there are positive constants $c$ and $C$ such that | Note that the eigenvalues $\lambda_i$ and EDR are only determined by the marginal distribution $\mu$ and the RKHS $\mathcal{H}$. The polynomial eigenvalue decay rate assumption is standard in related literature and is also referred to as the capacity condition or effective dimension | Since the minimax optimality of spectral algorithms has been proved for the attainable case ($f_{\rho}^{*}\in\mathcal{H}$) (Caponnetto, 2006; Caponnetto and de Vito, 2007, etc.), a large body of literature has studied the convergence rate of the generalization error of misspecified spectral algorithms ($f_{\rho}^{*}\notin\mathcal{H}$) and whether the rate is optimal in the minimax sense. It turns out that the qualification of the algorithm ($\tau>0$), the eigenvalue decay rate ($\beta>1$), the source condition ($s>0$) and the embedding index ($\alpha_{0}<1$) of the RKHS jointly determine the convergence behaviors of the spectral algorithms (see Section 3.1 for definitions). If we only assume that $f_{\rho}^{*}$ belongs to an interpolation space $[\mathcal{H}]^{s}$ of the RKHS $\mathcal{H}$ for some $s>0$, the well-known information-theoretic lower bound shows that the minimax lower bound (with respect to the $L^{2}$-norm generalization error) is $n^{-\frac{s\beta}{s\beta+1}}$. The state-of-the-art result shows that when $\alpha_{0}<s\leq 2\tau$, the upper bound of the convergence rate (with respect to the $L^{2}$-norm generalization error) is $n^{-\frac{s\beta}{s\beta+1}}$ and hence is optimal (Fischer and Steinwart 2020 for kernel ridge regression and Pillaud-Vivien et al. 2018 for gradient methods).
However, when $f_{\rho}^{*}\in[\mathcal{H}]^{s}$ for some $0<s\leq\alpha_{0}$, all the existing works need an additional boundedness assumption on $f_{\rho}^{*}$ to prove the same upper bound $n^{-\frac{s\beta}{s\beta+1}}$. The boundedness assumption will result in a smaller function space, i.e., $[\mathcal{H}]^{s}\cap L^{\infty}(\mathcal{X},\mu)\subsetneqq[\mathcal{H}]^{s}$ when $s\leq\alpha_{0}$. Fischer and Steinwart (2020) further reveal that the minimax rate associated with the smaller function space is larger than $n^{-\frac{\alpha\beta}{\alpha\beta+1}}$ for any $\alpha>\alpha_{0}$. This minimax lower bound is smaller than the upper bound of the convergence rate, and hence they cannot prove the minimax optimality of spectral algorithms when $s\leq\alpha_{0}$. | Suppose that there exists $\alpha_{0}>0$ such that | A
The inherent dependency of surface reconstruction methods on surface normals makes the visual perceptual quality of a point cloud an indirect yet important aspect of any mesh processing pipeline [7]. Although it is difficult to quantify this visual degradation in the case of point cloud simplification methods, one can say that the more enhanced the characteristic features of an object (such as sharp edges and high curvature regions) are in the simplified cloud, the higher is its human perceptual quality [19]. Therefore, an optimal point cloud simplification technique should preserve both the global structural appearance, and the salient features of the point cloud in question. Some of these methods will be discussed in detail in the upcoming section. | Many different kernel functions for GPs exist, and choosing a kernel is in itself a model selection problem as some kernels are more suited to modeling certain types of data. However, one characteristic which many kernels share is that they are defined using Euclidean distance. This presents an issue should we wish to use a GP to model variation in a quantity over a non-Euclidean space. Borovitskiy et al. [5] proposed a solution to this problem in the form of an extension to the Matérn kernel, which allows for modeling of functions whose domains are compact Riemannian manifolds. The approach proposed by the authors involves two stages. Firstly, numerical estimation of the eigenvalues $\lambda_n$ and eigenfunctions $f_n$ corresponding to the Laplace-Beltrami operator of the given manifold is performed. Secondly, for a manifold of dimensionality $d$, the kernel is approximated using a finite truncation of: | In this work we have presented a novel, one-shot point cloud simplification algorithm capable of preserving both the salient features and the overall structure of the original point cloud. We reduce the cloud size by up to three orders of magnitude without the need for computationally intensive training on huge datasets. This is achieved via a greedy algorithm which iteratively selects points based on a selection criterion determined by modeling the surface variation over the original point cloud using Gaussian processes with kernels which operate on Riemannian manifolds. We show that our technique achieves competitive results and runtimes when compared to a number of relevant methods, outperforming all baselines tested in terms of mean Hausdorff distance on Lucy, the largest and most complex point cloud we consider, consisting of approximately 14 million points. Our method can also be used to improve the computational efficiency of downstream tasks such as point cloud registration with no negative effects on the empirical performance. | The inherent dependency of surface reconstruction methods on surface normals makes the visual perceptual quality of a point cloud an indirect yet important aspect of any mesh processing pipeline [7]. Although it is difficult to quantify this visual degradation in the case of point cloud simplification methods, one can say that the more enhanced the characteristic features of an object (such as sharp edges and high curvature regions) are in the simplified cloud, the higher is its human perceptual quality [19]. 
Therefore, an optimal point cloud simplification technique should preserve both the global structural appearance, and the salient features of the point cloud in question. Some of these methods will be discussed in detail in the upcoming section. | Given that the point cloud representing an object exists on a Riemannian manifold in 3D space, Euclidean distance fails to measure the intrinsic distance between any two points on its surface. Recently, techniques which extend existing machine learning methods to model functions defined on manifolds have gained popularity. | D |
The label masks consist of three classes, namely the Gadolinium-enhancing tumor, the peritumoral edema, and the necrotic and non-enhancing tumor core. For the binary segmentation experiments, all three classes were merged into one. | We would also like to thank the NVIDIA Corporation for donating a GPU that was used for our experiments. | $256^3$ images, we distribute the model over 2 GPUs. The methods HalfRes and PatchDDM were trained on one GPU only. | We performed our experiments on NVIDIA A100 GPUs with 40 GB of memory each. | For our ablation study, we use two baselines with the same network as our proposed approach, but without patch-based training. | C
According to [38], other adaptive algorithms such as FedAdagrad and FedYogi are proposed to improve the model convergence rate under the situation of heterogeneous data. FedAdam employs adaptive learning rates and momentum by leveraging local updates from client devices to efficiently update the global model. FedAdagrad adjusts the learning rate based on the historical gradients of each model parameter, allowing the model to converge faster and achieve better performance. FedYogi, inspired by the Yogi optimizer, incorporates elements of adaptive learning rates and momentum to handle non-convex optimization problems in FL scenarios to improve global model convergence and accuracy. We conduct numerical experiments on CIFAR-10 with 20% of participating clients. The experiment results are illustrated in Table V and Fig. 10. Compared with other adaptive FL algorithms, our proposed FedAgg still performs better with higher accuracy and a faster convergence rate. | We systematically conduct numerical experiments designed to elucidate the influence exerted by the aggregation weight $\alpha$ in the objective function presented in Eq. (13) on the model efficacy and facilitate the practical application and promotion of FedAgg. As depicted in Fig. 12, the decrement of the hyperparameter $\alpha$ demonstrates that the FL framework accentuates the optimization of the discrepancy between the local model of client $i$ and the average local model, which in turn, bolsters the precision of the global model and expedites the convergence rate. Our findings underscore the significance of meticulous hyperparameter tuning within the FL systems. | According to [38], other adaptive algorithms such as FedAdagrad and FedYogi are proposed to improve the model convergence rate under the situation of heterogeneous data. FedAdam employs adaptive learning rates and momentum by leveraging local updates from client devices to efficiently update the global model. FedAdagrad adjusts the learning rate based on the historical gradients of each model parameter, allowing the model to converge faster and achieve better performance. FedYogi, inspired by the Yogi optimizer, incorporates elements of adaptive learning rates and momentum to handle non-convex optimization problems in FL scenarios to improve global model convergence and accuracy. We conduct numerical experiments on CIFAR-10 with 20% of participating clients. The experiment results are illustrated in Table V and Fig. 10. Compared with other adaptive FL algorithms, our proposed FedAgg still performs better with higher accuracy and a faster convergence rate. | We conduct ablation experiments to demonstrate the effectiveness of our proposed algorithm FedAgg across different local model architectures. In addition to the convolutional neural network (CNN) aforementioned, we also implement experiments on LeNet-5, AlexNet, VGG-11, ResNet-18, GoogLeNet, and DenseNet121. Note that ResNet introduces residual network architecture, GoogLeNet adopts the Inception module and DenseNet121 employs densely connected convolutional networks to effectively alleviate vanishing gradients, enable more efficient feature propagation, and increase the model accuracy. The learning rate for each architecture is set to be 0.1 and performs $T=100$ iterations of global training on the CIFAR-10 dataset with IID data distribution. Our results are shown in Table VI-B4. 
It is worth noting that FedAgg yields consistent enhancements in model performance across various local model architectures and increases the convergence rate of the global model. To observe the improvement of FedAgg across all architectures, we can visualize the intuitional experiment results in Fig. 11. | To demonstrate the effectiveness of our proposed algorithm and investigate whether the enhancements introduced by FedAgg remain consistent as the ratio of participating clients increases. Firstly, we partition the four benchmark datasets (i.e., MNIST, EMNIST-L, CIFAR-10, and CIFAR-100) into 100 clients and randomly select 20% of the total participating clients to participate in the FL training process with dynamically changed local training data and run 30, 50, 100 and 200 global communication iterations, respectively. The main experiment results are displayed in Table III. As shown in Figs. 5-8, we visualize the experiment results on all datasets with a 20% client participating ratio. It is evident that FedAgg dominates other state-of-the-art baseline methods with a faster convergence rate, higher model accuracy, lower training loss, and faster loss descending rate, which demonstrate the immense potential of the adaptive learning rate method in the FL framework. Besides, we conduct experiments with 100% participating clients to validate the effectiveness of FedAgg under the large-scale federated learning system. According to Table III, in most scenarios, FedAgg dominates the rest baselines and there is a consistent improvement in accuracy with increasing participating ratio. The above numerical experiment results illustrate that our proposed algorithm FedAgg performs well on both small-scale and large-scale FL systems, which demonstrates the capacity of FedAgg for widespread application in real-world federated learning scenarios involving monumental clients and random client participation. | C |
For GPT-4 assessment in Figure 4.1, LLaMA-Adapter obtains more ‘win’ compared to Alpaca and Alpaca-LoRA, respectively. | On a wide range of downstream tasks, we demonstrate the effectiveness of our proposed method for traditional tasks. | This fully demonstrates the effectiveness of our adaption method with zero-initialized attention mechanisms. | In this paper, we propose LLaMA-Adapter to fine-tune only lightweight zero-initialized attention mechanisms on top of the frozen LLaMA, other than updating parameters of the entire model. | If the adaption prompts are randomly initialized, they might bring disturbance to the word tokens at the beginning of training, which harms the fine-tuning stability and effectiveness. Considering this, we modify the vanilla self-attention at the last $L$ layers to be zero-initialized variants, as shown in Figure 2. | B
$K=2$ noun phrases are extracted for each caption and then prompted with a set of prompt templates such as “It is a video of {noun}”. | We directly use pre-trained models to extract video features as the input to G-TAD and do not further train the encoder. | In particular, the same model pre-trained on VideoCC achieves the best performance in zero-shot retrieval on MSR-VTT, compared with HowTo100M and WebVid-2M. | S-ViLM also achieves performance gain when the model is fine-tuned on the target MSR-VTT dataset, which further validates advantages of the pre-trained model. | Model architecture. We use a 12-layer ViT-base model with the patch size of $2\times 16\times 16$ as the video encoder and initialize it with weights pre-trained on Kinetics-400. | D
Motivated by this, we introduce ContraSim, a new similarity measure for interpreting NNs, based on contrastive learning (CL) (Chen et al., 2020; He et al., 2020). Contrary to prior work (e.g., Raghu et al., 2017; Kornblith et al., 2019), which defines closed-form general-purpose similarity measures, ContraSim is a learnable similarity measure that uses examples with a high similarity (the positive set) and examples that have a low similarity (the negative set), to train an encoder that maps representations to the space where similarity is measured. In the projected space, representation similarity is maximized with positive examples and minimized with negative examples. Our approach allows specializing the similarity measure to a particular domain, to obtain a more reliable and specific analysis. The similarity between projected representations is determined using a simpler closed-form measure. | We experimentally evaluate ContraSim on standard benchmark for similarity measures – the layer prediction benchmark Kornblith et al. (2019), and two new benchmarks we introduce in this paper: the multilingual benchmark and the image–caption benchmark. In experiments with both language and vision models and multiple datasets, ContraSim outperforms common similarity measures. In addition, we investigate a more challenging scenario, where during evaluation instead of choosing a random sentence, we retrieve a highly similar sentences as confusing examples, using the Facebook AI Similarity Search (FAISS) library Johnson et al. (2019). While other similarity measures are highly affected by this change, our method maintains a high accuracy with a very small degradation. We attribute this to the highly separable representations that our method learns. Even when ContraSim is trained on data from one domain/task and evaluated on data from another domain/task, it achieves superior performance. | Our method outperformed other similarity measures under the common layer prediction benchmark and two new benchmarks we proposed: the multilingual benchmark and the image–caption benchmark. It particularly shines in strengthened versions of said benchmarks, where random sampling is replaced with finding the most similar examples using FAISS. Moreover, we show that even when ContraSim is trained on data from one domain/task and evaluated on data from another domain/task, it achieves superior performance. | Our method, ContraSim, achieves excellent results. When trained on one dataset’s training set and evaluated on the same dataset’s test set, ContraSim achieves perfect accuracy under this benchmark, with a large margin over CKA results. This holds for both language and vision cases. Even when trained on one dataset and evaluated over another dataset, ContraSim surpasses other similarity measures, showing the transferability of the learned encoder projection between datasets. This is true both when transferring across domains (in text, between news texts from the Penn Treebank and Wikipedia texts), and when transferring across classification tasks (in images, between the 10-label CIFAR-10 and the 100-label CIFAR-100). | For evaluation, we use the known layer prediction benchmark and two new benchmarks we design: the multilingual benchmark and the image–caption benchmark. We further propose a strengthened version of the last two using the FAISS software. | A |
1 Units: pan $\phi$, tilt $\theta$, and roll $\psi$ [deg]; $f$ [mm]; $k_1$ [dimensionless]; REPE [pixel]; Executable rate [%] | 2 Implementations: López-Antequera [36], Wakai [58], Wakai [59], and ours using PyTorch [43]; Pritts [44] and Lochman [35] using The MathWorks MATLAB | Figure 17: Qualitative results in the cross-domain evaluation on the HoliCity test set. Our method using HRNet-W32 and compared methods were trained on SL-MH. From top to bottom: input images, ground-truth images, and results of López-Antequera et al. [36], Wakai and Yamashita [58], Wakai et al. [59], and our method. | Figure 6: Qualitative results on the test sets. (a) Results of conventional methods. From left to right: input images, ground truth (GT), and results of López-Antequera et al. [36], Wakai and Yamashita [58], Wakai et al. [59], Pritts et al. [44], and Lochman et al. [35]. (b) Results of our method. From left to right: input images, GT, and the results of our method using HRNet-W32 in a Manhattan world. | Figure 18: Qualitative results for images from off-the-shelf cameras. From top to bottom: input images and results of López-Antequera et al. [36], Wakai and Yamashita [58], Wakai et al. [59], Pritts et al. [44], Lochman et al. [35], and our method. The identifiers (IDs) correspond to the camera IDs used in [59], and the projection names are shown below the IDs. | A
$\sigma(L)\subset\Lambda:=\{\lambda\in C:\bar{d}(s)+\lambda\bar{n}(s)\text{ is Hurwitz}\}$. | are the aggregate input, output and state vectors, respectively, and $\otimes$ denotes the Kronecker product. No explicit assumptions are imposed a priori on the connectivity properties of the graph that describes the interconnection network. | To show the equivalence of the six statements in Theorem 1, the proof is structured as follows. We first prove the equivalence among statements (i), (ii) and (iii). Then, we prove the following chain of implications: | The topology of a directed graph $\mathcal{G}$ with $N\in\mathbb{N}$ nodes is characterized by the weighted adjacency matrix $\mathcal{W}\in\mathbb{R}^{N\times N}$ whose entry $\mathcal{W}_{ij}\geq 0$ denotes the weight of the edge pointing from node $j$ to node $i$. Defining the diagonal matrix $D:=\mathrm{diag}(\mathcal{W}\mathbf{1}_{N})$, we can introduce the Laplacian matrix $L:=D-\mathcal{W}$ associated with the graph | We prove the following standard result to highlight that no assumptions on the graph $\mathcal{G}$ are needed. | D
Louloudakis et al. studied behavioral issues resulting from framework-to-framework conversion (Louloudakis et al., 2023a). | They found failures in 10 out of 36 conversions. They created a fault localization and repair pipeline to localize and fix discrepancies (Louloudakis et al., 2023b). | In contrast, prior research on DL model converters is limited to measuring conversions of 5 DL models (Openja et al., 2022). | In the failure analysis, we found that crashes were largely due to Incompatibilities or Type Problems (Table 10). | Louloudakis et al. studied behavioral issues resulting from framework-to-framework conversion (Louloudakis et al., 2023a). | A |
They show the internal status of the system, leading to a more detailed explanation of what is happening. | The experiment proves that it is only needed to change the platform and the state estimation component (in this case, a plugin inside of it) to translate the experiment from simulation to the real world, even when the real system is heterogeneous. | TF tree generation: The module is in charge of generating the transformation trees [17] that will be used for the rest of the framework, allowing the system to represent information in different coordinate frames. This module is also in charge of managing the origin of the coordinated system in a multi-robot system. | To facilitate the implementation of different aerial platforms, Aerostack2 incorporates an AerialPlatform abstract class responsible for managing the capabilities associated with the direct integration of various aerial platforms into the framework. This abstraction facilitates the integration of new platforms into the framework and ensures compatibility with the entire framework. The responsibility of this interface is to gather the sensory measurements from the aircraft and transmit them to the rest of the system. In addition, it is tasked with receiving actuator commands and other requests from the various layers of the Aerostack2 framework and relaying them to the aircraft in a platform-specific manner. | The Alphanumeric Viewer is a component that monitors the state of specific variables of the system, e.g. sensor measurements, values corresponding to state estimation, references for controllers, etc. The information is distributed in different panes to facilitate the search for a specific variable of the system. | D |
This was just the beginning of an ongoing philosophical discussion, which has often included psychological elements, around human creativity (Barron, 1955, Berlyne, 1960, Bruner, 1962, Newell et al., 1962, Stein, 1974), as well as computational creativity (Macedo et al., 2004, Wiggins, 2006, Jordanous, 2009, Boden, 2009, Maher, 2010, Colton and Wiggins, 2012). | In general, computer scientists have always been fascinated by the possibility of building machines able to express themselves through writing, e.g., by composing poems and short stories, creating paintings, and so on. In particular, the rise of automatic text generation was contextual to the birth of personal computers. Examples include the Computerized Haiku by Margaret Masterman555http://www.in-vacua.com/cgi-bin/haiku.pl, the storyteller TALE-SPIN (Meehan, 1977), Racter and its poems’ book (Racter, 1984), and UNIVERSE, which was able to generate coherent and consistent characters (Lebowitz, 1983), just to name a few. Different techniques have been explored, from planning (e.g., Riedl and Young (2010)) and case-based reasoning (e.g., Turner (1994)) to evolutionary strategies (e.g., Manurung et al. (2012)). Some approaches combine all of them together (Gervás, 2013). | Nevertheless, the recent advancements in LLMs can be attributed to the introduction of fine-tuning through reinforcement learning from human feedback (RLHF) (Christiano et al., 2017). It consists of three steps: fine-tuning the pre-trained model in a supervised fashion on human-produced answers to sampled questions; training a reward model to predict which text among different options is the most appropriate based on human-labeled rankings; and fine-tuning the language model to maximize the learned reward (Stiennon et al., 2020). Although the main goal of RLHF is to improve conversational skills while mitigating mistakes and biases, it has also led to models capable of producing on-demand poems, songs, and novels, gaining global popularity666https://www.forbes.com/sites/martineparis/2023/02/03/chatgpt-hits-100-million-microsoft-unleashes-ai-bots-and-catgpt-goes-viral/?sh=70994247564e. Based on RLHF, first ChatGPT777https://openai.com/blog/chatgpt/ and then GPT-4 paved the way for several other similar models: Google’s Gemini (Gemini Team and Google, 2023), which extends to multimodal data; Meta’s Llama models (Dubey et al., 2024, Touvron et al., 2023), which replace RLHF with the more efficient direct preference optimization (DPO) (Rafailov et al., 2023); Mixtral (Jiang et al., 2024), which adaptively selects its layers’ parameters from distinct groups to increase the total parameter count without raising computational costs; and many others, as the competition intensifies day by day (Zhao et al., 2023). While they may differ in some technical details, these LLMs are always pre-trained on vast, general corpora of data and then fine-tuned using some form of RLHF to enhance their conversational skills. | Language plays a vital role in how we think, communicate, and interact with others111As remarked by ChatGPT itself when asked about the importance of language.. It is therefore of no surprise that natural language generation has always been one of the prominent branches of artificial intelligence (Jurafsky and Martin, 2023). We have witnessed a very fast acceleration of the pace of development in the past decade culminated with the invention of transformers (Vaswani et al., 2017). 
The possibility of exploiting large-scale data sets and the availability of increasing computing capacity has led to the definition of the so-called foundation models, which are able to achieve state-of-the-art performance in a variety of tasks (Bommasani et al., 2021). | Value refers to utility, performance, and attractiveness (Maher, 2010). It is also related to both the quality of the output, and its acceptance by society. Due to the large impact LLMs are already having (Bommasani et al., 2021) and the quality of outputs of the systems based on them (Stevenson et al., 2022b), it is possible to argue that the artifacts produced by them are indeed valuable. | A |
Both the indicator and modulating factor are solely related to the predicted probability rather than the generated pseudo nodule labels. Thus they are determined by the intrinsic foreground-background discrimination instead of the model predictions. Then the selected modulating factor is able to down-weight the well-classified instances’ entropy and focus on the hard ones. As in [60], $\gamma$ is a tunable focusing parameter to control the rate of down-weighting, and $\alpha$ is another hyper-parameter for dealing with the class imbalance. It is noteworthy that our proposed modulating factor choice mechanism here can be treated as a self-paced learning method [61, 62]. Since the instances which have predicted probabilities between $\tau_1$ and $\tau_2$ are neglected, at the beginning of the second step, there are not many instances involved in calculating the WE loss, and as the training progresses, more and more instances are used. This further ensures the modulating factor being proper. | To sum up, the loss function for training the student model comprises the supervised loss given the pseudo ground-truth nodules generated by the teacher model and the unsupervised WE loss: | This instance-level foreground-background discrimination not only avoids the requirement of target annotations, but also alleviates the problem of dominance of backgrounds in pulmonary nodule detection, which is common in small-scale object detection tasks. For optimal contrastive learning, we propose an auto-labeling mechanism to select the foreground instances (nodules) and background instances. Second, in Sec. III-B, we duplicate the adapted model into a teacher model and a student model. The teacher model is utilized to generate pseudo nodules for the supervision of the student model training. We choose to also update the teacher model using the student model’s weights to improve the accuracy of the pseudo nodules. To mitigate the adverse effect associated with pseudo label noise, we propose a weighted entropy loss as an additional unsupervised constraint for training the student model. This loss facilitates intrinsic foreground-background discrimination, thus aiding the adaptation process. The initial step of our approach ensures the generation of more accurate initial pseudo nodules for the second step, primarily utilizing knowledge from the source domain. The subsequent step then focuses on assimilating and leveraging characteristics unique to the target domain. | Nonetheless, these works mainly focus on the shifts between the source and target, and neglect the detection’s characteristics. For instance, the discrimination of the foreground objects and the backgrounds can naturally be an auxiliary supervision for the target data. Besides, the relatively smaller size of the nodules compared with the objects in natural images degrades the performance of general SFUDA object detection approaches. To address these two limitations for our introduced SFUDA pulmonary nodule detection task, we propose a novel Source-free Unsupervised cross-domain method for Pulmonary nodule detection (SUP), termed Instance-level Contrastive Instruction fine-tuning framework (ICI). The SUP-ICI is a two-step method as illustrated in Fig. 2. 
First, to leverage the discrepancy between the feature representations of nodules and other entities in the computed tomography (CT) images, we employ instance-level contrastive learning (CL) for adapting the source model to the target domain. This strategy eliminates the requirement for annotations in the target images. More importantly, its use of instance-level foreground-background discrimination realizes to focus on the small-scale features, eliminating being distracted by the dominant backgrounds in pulmonary nodule detection. Given the domain shifts between the source and target domains, we set a high pre-defined threshold for the region proposal network (RPN) classifier to auto-label the nodules for optimal CL. This initial adaptation step enhances the model’s ability to accurately detect nodules on the target domain, providing more accurate initial pseudo nodule generation for the second step. Second, the adapted model is duplicated into a teacher model and a student model for further training to elevate the pulmonary nodule detection performance. The teacher functions to generate pseudo nodules for supervising the training of the student. The weights of the student can in turn be utilized to update the teacher to further improve the accuracy of the pseudo nodules. In order to relieve the negative effect of the pseudo label noise, we propose a weighted entropy (WE) loss as an additional unsupervised constraint to facilitate the student training. The WE loss is different from the general entropy loss, and designed particularly for the detection network. It considers that in detection the non-nodule instances are usually much more than the nodules, and re-weights the entropy loss to down-weight the overly confident non-nodule instances and make the detector pay more attention to the less confident nodules. The WE loss also uses intrinsic foreground-background discrimination [22] to facilitate the further adaptation of the model. | In order to further reduce the negative impact of the pseudo nodule noise, we propose to introduce an additional unsupervised constraint for the student model training and design a weighted entropy (WE) loss. Considering the success of the entropy loss in dealing with the unlabeled data in the semi-supervised and unsupervised image classification tasks, we employ the entropy loss as an additional unsupervised constraint for the nodule and non-nodule classification on the unlabeled target domain. For one thing, the object recognition is much more difficult than the localization as claimed in [59], and for another, we assume that the confident classification enforces the accurate localization. However, different from the general image classification, there exists a class imbalance issue in object detection. In pulmonary nodule detection, the predictions of the non-nodule instances are dominant and overly confident, whereas the predictions of the nodules are the opposite. Simply adopting the original entropy loss can increase this nodule and non-nodule imbalance problem. Similar to the Focal loss[60], we propose to re-weight the original entropy loss using the predicted probability, thus down-weighting the easy instances and focusing on the hard ones for the classifier of the R-CNN [23, 56]: | A |
We use the double superscript in $\tilde{\bm{\delta}}_{d}^{ik}$ to represent that, for each instance of $\bar{\mathbf{d}}^{i}$, we need to sample an independent set of scenarios $\{\bm{\omega}^{ik}\}_{k=1}^{K}$. | To evaluate the quality of a specific first-stage solution, we compute the total cost arising from it by aggregating both the first-stage cost and the corresponding cost-to-go. The cost-to-go associated with a given first-stage solution is derived by solving the second-stage problem defined in (8) over a common set of 500 load realizations. We then compare the performance of different methods based on their resulting average total costs and solving times on the testing dataset. | The goal of training is to learn the optimal values for $\mathbf{w}^{0}$ and $\mathbf{w}^{R}$ that minimize a specifically designed loss function. Unlike standard machine learning practices, where the loss function is based on prediction errors using generated ground truth data, our model is trained in an unsupervised manner, which means the training dataset does not incorporate ground truth information. Instead, the parameters of $\phi^{0}$ and $\phi^{R}$ are updated to minimize the empirical mean of the two-stage problem’s total cost. | Note that the loss function in (9) represents exactly the average total cost of the two-stage DCOPF problem across the training dataset. In particular, the parameters of | solve the two-stage DCOPF problem, including the overall architecture design, the training of it and the decision-making procedure. | C
$p(y_{\star}\mid x_{\star},\mathcal{D})=\mathbb{E}_{p(\theta\mid\mathcal{D})}[p(y_{\star}\mid f_{\theta}(x_{\star}))]$. | The most common approaches to remedy this are through designing better priors, typically over network parameters (Louizos et al., 2017; Nalisnick, 2018; Atanov et al., 2019; Fortuin et al., 2021b) or predictive functions directly (Sun et al., 2019; Tran et al., 2020; Matsubara et al., 2021; D’Angelo & Fortuin, 2021, see Fortuin (2022) for an overview). | Conventionally, BNN researchers have focused on improving predictive performance using human-crafted priors over network parameters or predictive functions (e.g., Louizos et al., 2017; Tran et al., 2020; Matsubara et al., 2021; Fortuin et al., 2021a). | Improving BNN priors has been a long-standing goal for the community, primarily through improved human-designed priors. One approach is to improve the prior over the network’s parameters (Louizos et al., 2017; Nalisnick, 2018). Others place priors directly over predictive functions (Flam-Shepherd et al., 2017; Sun et al., 2019; Matsubara et al., 2021; Nalisnick et al., 2021; Raj et al., 2023). Both approaches, however, present challenges—the mapping between the network’s parameters and predictive functions is complex, while directly specifying our beliefs over predictive functions is itself a highly challenging task. For these reasons, as well as convenience, isotropic Gaussian priors over network parameters remain the most common choice (Fortuin, 2022), despite concerns (Wenzel et al., 2020). | To do this, we will draw on data augmentation (Yaeger et al., 1996; Krizhevsky et al., 2012; Shorten & Khoshgoftaar, 2019) and contrastive learning (Oord et al., 2019; Chen et al., 2020b; a; Grill et al., 2020; Hénaff et al., 2020; Chen & He, 2020; Foster et al., 2020; Miao et al., 2023). | C
Yet the ability to model, analyze and predict the evolution over time of the geometric features of data is of paramount interest in many applications. For example, cell differentiation can be studied by analyzing time series of single-cell mRNA expression data (scRNA); a core problem here is to quantify changes in gene expression profiles for cells collected during the process of development. Specifically, the data looks like a sample $(X_t)_{t=1}^{T}$ where $X_t\subset\mathbb{R}^{N}$ for $N$ in the tens of thousands. Cell type is captured in part by the cluster structure of the point clouds of expression vectors. Changes in the shape of these point clouds reflect differentiation events such as the emergence of new cell types. More precisely, so-called bifurcation events reflect when an ancestral cell type changes into multiple lineages, and can be detected by change in shape. Examples of studies of this kind include [77], which profiles several hundred thousand cells from mouse embryonic fibroblasts and provides evidence that shape provides insight into developmental trajectories. A toy example of a developmental process is given by modeling the genomic profiles of cells as generated by sampling from a superposition of two spherical Gaussians with centers moving apart over time. This represents the emergence of two distinct cell types from undifferentiated stem cells; see Figure 1.1. | quadratic range-based self-normalized test statistics to detect gradual and abrupt topological changes. This provides the applied researcher with a simple tool for inference on the true underlying shape dynamics over time. Returning to the example of cell differentiation, we use these results to test for changes in shape via topological shape descriptors in Section 5.2. However, our theory can also be used to test for the evolution of the number of clusters by working with other shape descriptors, such as hierarchical clustering dendrograms (Figure 1.2-1.3). | The need for reliable inference on shape and topological features in applications has led to substantial interest in integrating classical statistical techniques with topological invariants. Roughly speaking, TDA provides qualitative multiscale shape descriptors for point clouds, notably persistent homology. This is a higher-dimensional generalization of hierarchical clustering, encoding the feature scales at which “holes” of various dimensions appear and disappear. We refer the unfamiliar reader to Section 5.1. Key was the early work [63, 11], which established that the space where these topological shape descriptors (known as barcodes or persistence diagrams) take their values is Polish. | That is, a stable shape descriptor is a Lipschitz function from the set of compact metric spaces to a Polish space. As we shall see, this condition suffices for our framework. Notable examples of stable shape descriptors include dendrograms [22], many of the invariants of TDA such as persistent homology and zigzag persistence, and metric geometry invariants such as the distance distribution. In Section 5.2, we will explore an application in the context of persistent homology, the main shape descriptor from TDA.
| Persistent homology gives a rich family of qualitative stable shape descriptors that capture topological features of a point cloud progressively across different feature scales. We give a brief review of TDA and specifically persistent homology in Section 5. In short, the $k$th persistent homology, $PH_k$, captures how $k$-dimensional holes (i.e., connected components, tunnels, voids, etc.) appear and disappear as the feature scale changes. | B
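Each row above follows the same layout: a context passage, four candidate continuations, and a single-letter gold label at the end. The sketch below is a minimal illustration of how such pipe-separated records could be parsed and scored; it assumes exactly six fields per record, and the export file name rows.txt and the helper names are hypothetical (cells that themselves contain pipes or embedded newlines would need a structured export such as CSV or Parquet instead of this naive splitter).

from collections import Counter

def parse_row(line: str) -> dict:
    """Split one exported record into context, options A-D, and the gold label."""
    parts = [p.strip() for p in line.split(" | ")]
    if len(parts) != 6:
        raise ValueError(f"expected 6 pipe-separated fields, got {len(parts)}")
    context, a, b, c, d, label = parts
    return {"context": context, "options": {"A": a, "B": b, "C": c, "D": d}, "label": label}

def accuracy(predictions: list[str], rows: list[dict]) -> float:
    """Fraction of records whose predicted option letter matches the gold label."""
    return sum(p == r["label"] for p, r in zip(predictions, rows)) / len(rows)

if __name__ == "__main__":
    with open("rows.txt", encoding="utf-8") as f:  # hypothetical one-record-per-line export
        rows = [parse_row(line) for line in f if line.strip()]
    print(Counter(r["label"] for r in rows))       # gold-label distribution
    print(accuracy(["A"] * len(rows), rows))       # constant-answer baseline

The constant-answer baseline is only a sanity check on the label distribution; a real evaluation would replace it with a model's predicted option letters.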