text — string lengths 64 to 6.93k
,Unnamed: 0.1,TweetID,AuthorID,AuthorName,Tweets,arxiv_link,Abstract,Title,Thread_length,Tweets_coarse,year,month,tweet_length |
0,31,1463733877021700096,1237589672609476611,Zhimei Ren,"Excited to introduce our new paper on sensitivity analysis of individual treatment effects: <LINK>. Predictive inference of counterfactuals and ITEs, robust to unmeasured confounding. Joint work with @YingJin531 and Emmanuel Candès. (1/n) @YingJin531 2/ Previous work (@lihua_lei_stat ) provides calibrated predictive inference of counterfactuals and ITEs with experimental/unconfounded data. What happens when there is unmeasured confounding? @YingJin531 3/ The distributional shift from observations to counterfactuals becomes unidentifiable, but it is bounded (partially identifiable) under the sensitivity model! @YingJin531 4/ We propose robust conformal inference procedures when the test distribution differs from the training distribution by a limited amount; it wraps around any ML algorithm to produce valid prediction sets and works for a general class of robust prediction problems. @YingJin531 5/ How do we know whether a specific individual benefits from a treatment? How robust is our conclusion against unmeasured confounding? The procedure outputs a numerical value to quantify the robustness of causal conclusion on an ITE against unmeasured confounding.",https://arxiv.org/abs/2111.12161,"We propose a model-free framework for sensitivity analysis of individual treatment effects (ITEs), building upon ideas from conformal inference. For any unit, our procedure reports the $\Gamma$-value, a number which quantifies the minimum strength of confounding needed to explain away the evidence for ITE. Our approach rests on the reliable predictive inference of counterfactuals and ITEs in situations where the training data is confounded. Under the marginal sensitivity model of Tan (2006), we characterize the shift between the distribution of the observations and that of the counterfactuals. 
We first develop a general method for predictive inference of test samples from a shifted distribution; we then leverage this to construct covariate-dependent prediction sets for counterfactuals. No matter the value of the shift, these prediction sets (resp. approximately) achieve marginal coverage if the propensity score is known exactly (resp. estimated). We describe a distinct procedure also attaining coverage, however, conditional on the training data. In the latter case, we prove a sharpness result showing that for certain classes of prediction problems, the prediction intervals cannot possibly be tightened. We verify the validity and performance of the new methods via simulation studies and apply them to analyze real datasets. ","Sensitivity Analysis of Individual Treatment Effects: A Robust Conformal
Inference Approach",5,"['Excited to introduce our new paper on sensitivity analysis of individual treatment effects: <LINK>. \nPredictive inference of counterfactuals and ITEs, robust to unmeasured confounding. Joint work with @YingJin531 and Emmanuel Candès. (1/n)', '@YingJin531 2/ \nPrevious work (@lihua_lei_stat ) provides calibrated predictive inference of counterfactuals and ITEs with experimental/unconfounded data. What happens when there is unmeasured confounding?', '@YingJin531 3/\nThe distributional shift from observations to counterfactuals becomes unidentifiable, but it is bounded (partially identifiable) under the sensitivity model!', '@YingJin531 4/\nWe propose robust conformal inference procedures when the test distribution differs from the training distribution by a limited amount; it wraps around any ML algorithm to produce valid prediction sets and works for a general class of robust prediction problems.', '@YingJin531 5/ \nHow do we know whether a specific individual benefits from a treatment? How robust is our conclusion against unmeasured confounding? The procedure outputs a numerical value to quantify the robustness of causal conclusion on an ITE against unmeasured confounding.']",21,11,1172 |
1,172,1230779292386254849,174298756,Adel Bibi,"I'm very excited about this new work in collaboration with @MotassimF, Hasan Hammoud, Mohamed Gaafar and Bernard Ghanem. We study the decision boundaries through the lens of tropical geometry in a single hidden layer neural network. <LINK> <LINK> @MotassimF We revisit the lottery ticket hypothesis through our new framework. Moreover, we propose new novel geometrical approaches for network pruning and the construction of adversarial attacks.",https://arxiv.org/abs/2002.08838,"This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piecewise linear non-linearity activations. We use tropical geometry, a new development in the area of algebraic geometry, to characterize the decision boundaries of a simple network of the form (Affine, ReLU, Affine). Our main finding is that the decision boundaries are a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes. The generators of these zonotopes are functions of the network parameters. This geometric characterization provides new perspectives to three tasks. (i) We propose a new tropical perspective to the lottery ticket hypothesis, where we view the effect of different initializations on the tropical geometric representation of a network's decision boundaries. (ii) Moreover, we propose new tropical based optimization reformulations that directly influence the decision boundaries of the network for the task of network pruning. (iii) At last, we discuss the reformulation of the generation of adversarial attacks in a tropical sense. We demonstrate that one can construct adversaries in a new tropical setting by perturbing a specific set of decision boundaries by perturbing a set of parameters in the network. ","On the Decision Boundaries of Neural Networks: A Tropical Geometry
Perspective",2,"[""I'm very excited about this new work in collaboration with @MotassimF, Hasan Hammoud, Mohamed Gaafar and Bernard Ghanem.\n\nWe study the decision boundaries through the lens of tropical geometry in a single hidden layer neural network.\n\n<LINK> <LINK>"", '@MotassimF We revisit the lottery ticket hypothesis through our new framework. Moreover, we propose new novel geometrical approaches for network pruning and the construction of adversarial attacks.']",20,02,444 |
2,101,1468936846927450117,871354083151667200,Tomek Korbak,"How to fine-tune conditional language models with minimal catastrophic forgetting? A thread about our new #NeurIPS21 #CtrlGen workshop paper with @hadyelsahar, @germank and @MarcDymetman. <LINK> 1/7 <LINK> @hadyelsahar @germank @MarcDymetman Imagine you have a @GitHubCopilot-like Python LM that you want to force to generate Python functions that always compile and respect PEP8. Or imagine you're using a T5 summariser and, concerned about hallucination, are looking for ways of forcing the model to be factual. 2/7 In general, the goal is to fine-tune a pre-trained model to generate samples x satisfying a constraint, b(x). However, the fine-tuned model must stay close to the original, a, in order to avoid catastrophic forgetting. 3/7 We represent the desired behaviour of a fine-tuned model conditioned on a context c as a normalised product of two factors: the original model (conditioned on c) and a binary scorer b(x,c). 4/7 <LINK> To approximate the target distribution p_c representing desired behaviour, we train a new model π_θ to minimise expected the cross-entropy between πθ conditioned on a context c (drawn from τ) and its target distribution p_c. 5/10 <LINK> For summarisation, we found that CPDG, our method indeed increases the number of relevant named entities in summaries (prediction-source) which in turn increases the Rouge-1 score of generated summaries (computed with respect to their ground-truth summaries). 6/7 <LINK> For code generation, we were able to increase the compilability and decreases the number of PEP8 violations while staying close to the statistics of the original model (e.g. in terms of average sample length). This is in contrast with RL baselines. 7/7 <LINK> Come see our poster on Monday, 13 December at NeurIPS CtrlGen workshop! 
And here's link to the paper again: <LINK>",https://arxiv.org/abs/2112.00791,"Machine learning is shifting towards general-purpose pretrained generative models, trained in a self-supervised manner on large amounts of data, which can then be applied to solve a large number of tasks. However, due to their generic training methodology, these models often fail to meet some of the downstream requirements (e.g. hallucination in abstractive summarization or wrong format in automatic code generation). This raises an important question on how to adapt pre-trained generative models to a new task without destroying its capabilities. Recent work has suggested to solve this problem by representing task-specific requirements through energy-based models (EBMs) and approximating these EBMs using distributional policy gradients (DPG). Unfortunately, this approach is limited to unconditional distributions, represented by unconditional EBMs. In this paper, we extend this approach to conditional tasks by proposing Conditional DPG (CDPG). We evaluate CDPG on three different control objectives across two tasks: summarization with T5 and code generation with GPT-Neo. Our results show that fine-tuning using CDPG robustly moves these pretrained models closer towards meeting control objectives and -- in contrast with baseline approaches -- does not result in catastrophic forgetting. ","Controlling Conditional Language Models with Distributional Policy
Gradients",8,"['How to fine-tune conditional language models with minimal catastrophic forgetting? A thread about our new #NeurIPS21 #CtrlGen workshop paper with @hadyelsahar, @germank and @MarcDymetman. <LINK> 1/7 <LINK>', ""@hadyelsahar @germank @MarcDymetman Imagine you have a @GitHubCopilot-like Python LM that you want to force to generate Python functions that always compile and respect PEP8. Or imagine you're using a T5 summariser and, concerned about hallucination, are looking for ways of forcing the model to be factual. 2/7"", 'In general, the goal is to fine-tune a pre-trained model to generate samples x satisfying a constraint, b(x). However, the fine-tuned model must stay close to the original,\xa0a, in order to avoid catastrophic forgetting. 3/7', 'We represent the desired behaviour of a fine-tuned model conditioned on a context c as a normalised product of two factors: the original model (conditioned on c) and a binary scorer b(x,c). 4/7 https://t.co/QyPfeueg0n', 'To approximate the target distribution p_c representing desired behaviour, we train a new model π_θ to minimise expected the cross-entropy between πθ conditioned on a context c (drawn from τ) and its target distribution p_c. 5/10 https://t.co/OnitxhyWAB', 'For summarisation, we found that CPDG, our method indeed increases the number of relevant named entities in summaries (prediction-source) which in turn increases the Rouge-1 score of generated summaries (computed with respect to their ground-truth summaries). 6/7 https://t.co/BSbSiwC0bQ', 'For code generation, we were able to increase the compilability and decreases the number of PEP8 violations while staying close to the statistics of the original model (e.g. in terms of average sample length). This is in contrast with RL baselines. 7/7 https://t.co/5SP9p0xdHi', ""Come see our poster on Monday, 13 December at NeurIPS CtrlGen workshop! And here's link to the paper again: https://t.co/1TVyMxQptu""]",21,12,1824 |
3,58,1417870039378583554,1360892702,Ekin Dogus Cubuk,"New paper on ML & physics at ICML! Learn2Hop: Learned Optimization on Rough Landscapes With Applications to Atomic Structural Optimization We adapt learned optimizers for atomic structural optimization, and compare to baselines from physics. abs: <LINK> <LINK> Finding the low-energy minima of atomic energy landscapes efficiently is one of the most important challenges in materials science. ML has become ubiquitous in physics as a curve-fitting tool, but hasn't been utilized as much for optimization. On the well-studied benchmark of LJ clusters, learned optimizers generalize well across system sizes. They increase the success rate on simple landscapes like LJ13, and lower the final energy on glassy landscapes such as LJ75. <LINK> Learned optimizers outperform baselines such as Adam and basin-hopping on more realistic landscapes such as ""glassy"" gold clusters and amorphous silicon. <LINK> Learned optimizers also show some generalization across the periodic table: e.g. when trained only on gold-silver clusters, they can outperform baselines on palladium-platinum clusters. They do better however if also trained on clusters containing palladium or platinum. <LINK> Still too challenging to train these optimizers on more expensive potentials such as DFT, but we are excited about the future of this direction for involving ML in atomic structural optimization! with @Luke_Metz and @sschoenholz, led by the incredible resident @amilmerchant. Check out our presentation at ICML for more details: <LINK>",http://arxiv.org/abs/2107.09661,"Optimization of non-convex loss surfaces containing many local minima remains a critical problem in a variety of domains, including operations research, informatics, and material design. Yet, current techniques either require extremely high iteration counts or a large number of random restarts for good performance. 
In this work, we propose adapting recent developments in meta-learning to these many-minima problems by learning the optimization algorithm for various loss landscapes. We focus on problems from atomic structural optimization--finding low energy configurations of many-atom systems--including widely studied models such as bimetallic clusters and disordered silicon. We find that our optimizer learns a 'hopping' behavior which enables efficient exploration and improves the rate of low energy minima discovery. Finally, our learned optimizers show promising generalization with efficiency gains on never before seen tasks (e.g. new elements or compositions). Code will be made available shortly. ",Learn2Hop: Learned Optimization on Rough Landscapes,7,"['New paper on ML & physics at ICML!\n \nLearn2Hop: Learned Optimization on Rough Landscapes\nWith Applications to Atomic Structural Optimization\n \nWe adapt learned optimizers for atomic structural optimization, and compare to baselines from physics. \n\nabs: <LINK> <LINK>', ""Finding the low-energy minima of atomic energy landscapes efficiently is one of the most important challenges in materials science.\n \nML has become ubiquitous in physics as a curve-fitting tool, but hasn't been utilized as much for optimization."", 'On the well-studied benchmark of LJ clusters, learned optimizers generalize well across system sizes. They increase the success rate on simple landscapes like LJ13, and lower the final energy on glassy landscapes such as LJ75. https://t.co/dlCIUqORmm', 'Learned optimizers outperform baselines such as Adam and basin-hopping on more realistic landscapes such as ""glassy"" gold clusters and amorphous silicon. https://t.co/UqGOaHPKrw', 'Learned optimizers also show some generalization across the periodic table: e.g. when trained only on gold-silver clusters, they can outperform baselines on palladium-platinum clusters.\n \nThey do better however if also trained on clusters containing palladium or platinum. 
https://t.co/ksByIKyZJh', 'Still too challenging to train these optimizers on more expensive potentials such as DFT, but we are excited about the future of this direction for involving ML in atomic structural optimization!\n \nwith @Luke_Metz and @sschoenholz, led by the incredible resident @amilmerchant.', 'Check out our presentation at ICML for more details: \n\nhttps://t.co/kFyNd7rYYe']",21,07,1520 |
4,60,1471052426413322240,1273473804438667265,Haritz Puerto,"New preprint!! How to combine pretrained QA agents to create a multi-agent model that can solve any question? Check our paper ""MetaQA: Combining Expert Agents for Multi-Skill Question Answering"" Paper: <LINK> w/ @gozde_gul_sahin and @IGurevych at @UKPLab #NLProc <LINK> @gozde_gul_sahin @IGurevych @UKPLab Code: <LINK> #MachineLearning #NLProc",https://arxiv.org/abs/2112.01922,"The recent explosion of question answering (QA) datasets and models has increased the interest in the generalization of models across multiple domains and formats by either training on multiple datasets or by combining multiple models. Despite the promising results of multi-dataset models, some domains or QA formats may require specific architectures, and thus the adaptability of these models might be limited. In addition, current approaches for combining models disregard cues such as question-answer compatibility. In this work, we propose to combine expert agents with a novel, flexible, and training-efficient architecture that considers questions, answer predictions, and answer-prediction confidence scores to select the best answer among a list of answer candidates. Through quantitative and qualitative experiments we show that our model i) creates a collaboration between agents that outperforms previous multi-agent and multi-dataset approaches in both in-domain and out-of-domain scenarios, ii) is highly data-efficient to train, and iii) can be adapted to any QA format. We release our code and a dataset of answer predictions from expert agents for 16 QA datasets to foster future developments of multi-agent systems on this https URL ",MetaQA: Combining Expert Agents for Multi-Skill Question Answering,2,"['New preprint!! 
How to combine pretrained QA agents to create a multi-agent model that can solve any question?\n\nCheck our paper ""MetaQA: Combining Expert Agents for Multi-Skill Question Answering""\nPaper: <LINK>\nw/ @gozde_gul_sahin and @IGurevych at @UKPLab #NLProc <LINK>', '@gozde_gul_sahin @IGurevych @UKPLab Code: https://t.co/0R18KO4Lif #MachineLearning #NLProc']",21,12,343 |
5,27,1443225724559806467,356676252,John Regan,"New paper on the @arxiv - <LINK> Some of the most massive first stars in the Universe were red and very bright! Tyrone Woods led a project where we looked at the emission spectrum from rapidly accreting massive stars and the great news is that @JWSTObserver <LINK> should be able to see these distant early massive galaxies (with a bit of luck). Thanks to all of the co-authors as always: Tyrone Woods (lead author), Chris Willott, @AstroAhura @TurloughDownes @bwoshea & Mike Norman @MUTheorPhys @royalsociety @scienceirel <LINK>",https://arxiv.org/abs/2109.13279,"Identifying stars formed in pristine environments (Pop III) within the first billion years is vital to uncovering the earliest growth and chemical evolution of galaxies. Pop III galaxies, however, are typically expected to be too faint and too few in number to be detectable by forthcoming instruments without extremely long integration times and/or extreme lensing. In an environment, however, where star formation is suppressed until a halo crosses the atomic cooling limit (e.g., by a modest Lyman-Werner flux, high baryonic streaming velocities, and/or dynamical heating effects),primordial halos can form substantially more numerous and more massive stars. Some of these stars will in-turn be accreting more rapidly than they can thermally relax at any given time. Using high resolution cosmological zoom-in simulations of massive star formation in high-z halos, we find that such rapidly accreting stars produce prominent spectral features which would be detectable by {\it JWST}. The rapid accretion episodes within the halo lead to stochastic reprocessing of 0--20\% of the total stellar emission into the rest-frame optical over long timescales, a unique signature which may allow deep observations to identify such objects out to $z \sim 10-13$ using mid- and wide-band NIRCam colors alone. ","Some First Stars Were Red: Detecting Signatures of Massive Population
III Formation Through Long-Term Stochastic Color Variations",2,"['New paper on the @arxiv - <LINK> Some of the most massive first stars in the Universe were red and very bright! Tyrone Woods led a project where we looked at the emission spectrum from rapidly accreting massive stars and the great news is that @JWSTObserver <LINK>', 'should be able to see these distant early massive galaxies (with a bit of luck). Thanks to all of the co-authors as always: Tyrone Woods (lead author), Chris Willott, @AstroAhura @TurloughDownes @bwoshea & Mike Norman @MUTheorPhys @royalsociety @scienceirel https://t.co/fedKi7r0Bg']",21,09,529 |
6,79,1260533396310196225,138395156,Professor Sondipon Adhikari,"Our new paper from a collaboration between @SU_engIMPACT and U. of British Columbia entitled ""Machine learning based #digitaltwins for dynamical systems with multiple time-scales"" is now available for download from <LINK>. Please send your comments/suggestions. <LINK>",http://arxiv.org/abs/2005.05862,"Digital twin technology has a huge potential for widespread applications in different industrial sectors such as infrastructure, aerospace, and automotive. However, practical adoptions of this technology have been slower, mainly due to a lack of application-specific details. Here we focus on a digital twin framework for linear single-degree-of-freedom structural dynamic systems evolving in two different operational time scales in addition to its intrinsic dynamic time-scale. Our approach strategically separates into two components -- (a) a physics-based nominal model for data processing and response predictions, and (b) a data-driven machine learning model for the time-evolution of the system parameters. The physics-based nominal model is system-specific and selected based on the problem under consideration. On the other hand, the data-driven machine learning model is generic. For tracking the multi-scale evolution of the system parameters, we propose to exploit a mixture of experts as the data-driven model. Within the mixture of experts model, Gaussian Process (GP) is used as the expert model. The primary idea is to let each expert track the evolution of the system parameters at a single time-scale. For learning the hyperparameters of the `mixture of experts using GP', an efficient framework the exploits expectation-maximization and sequential Monte Carlo sampler is used. Performance of the digital twin is illustrated on a multi-timescale dynamical system with stiffness and/or mass variations. The digital twin is found to be robust and yields reasonably accurate results. 
One exciting feature of the proposed digital twin is its capability to provide reasonable predictions at future time-steps. Aspects related to the data quality and data quantity are also investigated. ","Machine learning based digital twin for dynamical systems with multiple
time-scales",1,"['Our new paper from a collaboration between @SU_engIMPACT and U. of British Columbia entitled ""Machine learning based #digitaltwins for dynamical systems with multiple time-scales"" is now available for download from <LINK>. Please send your comments/suggestions. <LINK>']",20,05,268 |
7,16,1290085946101178368,1196266674954985472,Nirmal Raj,"New paper with @GrahamKribs and @davemckeen on hunting new forces in Nature by breaking up protons! <LINK> Thread follows. <LINK> Everyone knows the Russian doll-like assembly of matter: atom > nucleus > protons & neutrons > quarks & gluons. This knowledge was gained over the last century using special microscopy: smashing these objects hard with a projectile particle. <LINK> Per Heisenberg's uncertainty principle, transfer more momentum to the target and you resolve it better. The world within protons is still probed this way by shattering them with electrons in particle colliders. <LINK> The projectile electron strikes the target quark via the electromagnetic force carried by the photon; similar strikes occur at smaller distance scales (i.e. ""deep inside"" the proton) via the weak force carried by the Z boson. <LINK> Four fundamental forces seem to govern all natural phenomena: gravity, electromagnetism, the weak force, and the strong. Is there a fifth force? There are compelling reasons for one to exist -- in any case it'd be good to look in settings where it might show up. In a minimal theoretical setup, the electrons and quarks in the proton-breaking experiment should feel this new force in addition to electromagnetism and the weak force. <LINK> That'd alter the results of these experiments. We exploit that to place generic constraints on a new force-carrier often called the ""dark photon"". We show that the latest proton-breaking collider HERA and the future LHeC could glimpse dark photons of up to 5-100 TeV masses! <LINK>",https://arxiv.org/abs/2007.15655,"Deep inelastic scattering of $e^{\pm}$ off protons is sensitive to contributions from ""dark photon"" exchange. Using HERA data fit to HERA's parton distribution functions, we obtain the model-independent bound $\epsilon \lesssim 0.02$ on the kinetic mixing between hypercharge and the dark photon for dark photon masses $\lesssim 10$ GeV. 
This slightly improves on the bound obtained from electroweak precision observables. For higher masses the limit weakens monotonically; $\epsilon \lesssim 1$ for a dark photon mass of $5$ TeV. Utilizing PDF sum rules, we demonstrate that the effects of the dark photon cannot be (trivially) absorbed into re-fit PDFs, and in fact lead to non-DGLAP (Bjorken $x_{\rm B}$-independent) scaling violations that could provide a smoking gun in data. The proposed $e^\pm p$ collider operating at $\sqrt{s} = 1.3$ TeV, LHeC, is anticipated to accumulate $10^3$ times the luminosity of HERA, providing substantial improvements in probing the effects of a dark photon: sensitivity to $\epsilon$ well below that probed by electroweak precision data is possible throughout virtually the entire dark photon mass range, as well as being able to probe to much higher dark photon masses, up to $100$ TeV. ",Breaking up the Proton: An Affair with Dark Forces,7,"['New paper with @GrahamKribs and @davemckeen on hunting new forces in Nature by breaking up protons! <LINK> Thread follows. <LINK>', 'Everyone knows the Russian doll-like assembly of matter: atom > nucleus > protons & neutrons > quarks & gluons. This knowledge was gained over the last century using special microscopy: smashing these objects hard with a projectile particle. https://t.co/7f6Hs96SYI', ""Per Heisenberg's uncertainty principle, transfer more momentum to the target and you resolve it better. The world within protons is still probed this way by shattering them with electrons in particle colliders. https://t.co/gHDXJphkqb"", 'The projectile electron strikes the target quark via the electromagnetic force carried by the photon; similar strikes occur at smaller distance scales (i.e. ""deep inside"" the proton) via the weak force carried by the Z boson. https://t.co/Ye24ZP0ebQ', ""Four fundamental forces seem to govern all natural phenomena: gravity, electromagnetism, the weak force, and the strong. Is there a fifth force? 
There are compelling reasons for one to exist -- in any case it'd be good to look in settings where it might show up."", 'In a minimal theoretical setup, the electrons and quarks in the proton-breaking experiment should feel this new force in addition to electromagnetism and the weak force. https://t.co/mFpLI1ANwm', 'That\'d alter the results of these experiments. We exploit that to place generic constraints on a new force-carrier often called the ""dark photon"". We show that the latest proton-breaking collider HERA and the future LHeC could glimpse dark photons of up to 5-100 TeV masses! https://t.co/HMQvxYZFcA']",20,07,1560 |
8,30,1221714698812477440,882303076505456642,Timon Emken,"Fresh off the press: Our new #paper on next-generation's direct sub-GeV #DarkMatter searches with semiconductors on today's @arxiv. <LINK> We study future experiments' discovery reaches using frequentist and #MonteCarlo methods. <LINK> @arxiv The paper's emphasis is on the improved statistical treatment, the background modeling, and a study of astrophysical uncertainties, e.g. of the local escape velocity of our galaxy. Note: The figure does not show projected constraints, but discovery reaches of future experiments. <LINK> @arxiv It is the hard work of four Bachelor students here at @ChalmersPhysics that made this work possible. @BradleyKavanagh @arxiv Thanks! Yeah, I might as well embrace the weird arXiv numbers sooner than later.",https://arxiv.org/abs/2001.08910,"We compute the projected sensitivity to dark matter (DM) particles in the sub-GeV mass range of future direct detection experiments using germanium and silicon semiconductor targets. We perform this calculation within the dark photon model for DM-electron interactions using the likelihood ratio as a test statistic, Monte Carlo simulations, and background models that we extract from recent experimental data. We present our results in terms of DM-electron scattering cross section values required to reject the background only hypothesis in favour of the background plus DM signal hypothesis with a statistical significance, $\mathcal{Z}$, corresponding to 3 or 5 standard deviations. We also test the stability of our conclusions under changes in the astrophysical parameters governing the local space and velocity distribution of DM in the Milky Way. 
In the best-case scenario, when a high-voltage germanium detector with an exposure of $50$ kg-year and a CCD silicon detector with an exposure of $1$ kg-year and a dark current rate of $1\times10^{-7}$ counts/pixel/day have simultaneously reported a DM signal, we find that the smallest cross section value compatible with $\mathcal{Z}=3$ ($\mathcal{Z}=5$) is about $8\times10^{-42}$ cm$^2$ ($1\times10^{-41}$ cm$^2$) for contact interactions, and $4\times10^{-41}$ cm$^2$ ($7\times10^{-41}$ cm$^2$) for long-range interactions. Our sensitivity study extends and refine previous works in terms of background models, statistical methods, and treatment of the underlying astrophysical uncertainties. ","Projected sensitivity to sub-GeV dark matter of next-generation
semiconductor detectors",4,"[""Fresh off the press: Our new #paper on next-generation's direct sub-GeV #DarkMatter searches with semiconductors on today's @arxiv.\n\n<LINK>\n\nWe study future experiments' discovery reaches using frequentist and #MonteCarlo methods. <LINK>"", ""@arxiv The paper's emphasis is on the improved statistical treatment, the background modeling, and a study of astrophysical uncertainties, e.g. of the local escape velocity of our galaxy.\n\nNote: The figure does not show projected constraints, but discovery reaches of future experiments. https://t.co/BRCsbnwS3j"", '@arxiv It is the hard work of four Bachelor students here at @ChalmersPhysics that made this work possible.', '@BradleyKavanagh @arxiv Thanks!\n\nYeah, I might as well embrace the weird arXiv numbers sooner than later.']",20,01,742 |
9,38,1133537573366747136,863828060600139776,Dr. Deep Anand,"check out our new paper on the motions of galaxies in the local Universe! (<LINK>) <LINK> mini-summary: The Milky Way lies in a thin plane (the Local Sheet) right near a large void (the Local Void). Here we test the idea that voids expand by measuring the peculiar velocities of galaxies in the direction opposite the void. We received near-infrared Hubble observations for four dwarf galaxies (the binary galaxy HIZSS-003 pictured here) to measure their tip of the red giant branch distances and reconstruct their motions. <LINK> We would expect these galaxies to have a negative peculiar velocity (towards us, blue in the plot), which would be an imprint of our downward motion towards these galaxies and away from the Local Void. This is consistent with what we find! (galaxies labelled 1-4). <LINK> However, we did find one galaxy (GALFA-DW4) in the antivoid direction that may have a positive peculiar velocity, where the expectation is the exact opposite. Unfortunately the present Hubble data is not deep enough to nail down a robust distance, so this matter is still TBD. <LINK> We insert these galaxies into our group's numerical action methods model (Shaya et al. 2017) to reconstruct their full 3D motions. Here we see that most galaxies in the Local Volume participate in a few key flow patterns <LINK> - i.e. 1) Evacuation from the Local Void (at +SGZ), 2) a flow towards Virgo and the Great Attractor (+SGY, -SGX) , 3) only modest peculiar velocities for members of the Local Sheet",https://arxiv.org/abs/1905.11416,"The Milky Way lies in a thin plane, the Local Sheet, a part of a wall bounding the Local Void lying toward the north supergalactic pole. Galaxies with accurate distances both above and below this supergalactic equatorial plane have systematically negative peculiar velocities. The interpretation of this situation is that the Local Void is expanding, giving members of the Local Sheet deviant velocities toward the south supergalactic pole. The few galaxies within the void are evacuating the void. Galaxies in a filament in the supergalactic south are not feeling the expansion so their apparent motion toward us is mainly a reflex of our motion. The model of the local velocity field was uncertain because the apex of our motion away from the Local Void lies in obscurity near the Galactic plane. Here, results of Hubble Space Telescope infrared observations are reported that find tip of the red giant branch distances to four obscured galaxies. All the distances are $\sim7$ Mpc, revealing that these galaxies are part of a continuous filamentary structure passing between the north and south Galactic hemispheres and sharing the same kinematic signature of peculiar velocities toward us. A fifth galaxy nearby in projection, GALFA-DW4, has an ambiguous distance. If nearby at $\sim 3$ Mpc, this galaxy has an anomalous velocity away from us of +348 km/s. Alternatively, perhaps the resolved stars are on the asymptotic giant branch and the galaxy is beyond 6 Mpc whence the observed velocity would not be unusual. ",Peculiar Velocities of Galaxies Just Beyond the Local Group,7,"['check out our new paper on the motions of galaxies in the local Universe! \n\n(<LINK>) <LINK>', 'mini-summary: The Milky Way lies in a thin plane (the Local Sheet) right near a large void (the Local Void). Here we test the idea that voids expand by measuring the peculiar velocities of galaxies in the direction opposite the void.', 'We received near-infrared Hubble observations for four dwarf galaxies (the binary galaxy HIZSS-003 pictured here) to measure their tip of the red giant branch distances and reconstruct their motions. https://t.co/8pW4kmymVA', 'We would expect these galaxies to have a negative peculiar velocity (towards us, blue in the plot), which would be an imprint of our downward motion towards these galaxies and away from the Local Void. \nThis is consistent with what we find! (galaxies labelled 1-4). https://t.co/dXmgUBR173', 'However, we did find one galaxy (GALFA-DW4) in the antivoid direction that may have a positive peculiar velocity, where the expectation is the exact opposite. Unfortunately the present Hubble data is not deep enough to nail down a robust distance, so this matter is still TBD. https://t.co/3THPg74KPx', ""We insert these galaxies into our group's numerical action methods model (Shaya et al. 2017) to reconstruct their full 3D motions. Here we see that most galaxies in the Local Volume participate in \na few key flow patterns https://t.co/DDWh8M8Aax"", '- i.e. 1) Evacuation from the Local Void (at +SGZ), 2) a flow towards Virgo and the Great Attractor (+SGY, -SGX) , 3) only modest peculiar velocities for members of the Local Sheet']",19,05,1496 |
10,54,1151326890331971585,4750366201,Bryan Ostdiek,Very pleased to announce our new paper using #MachineLearning to tell what stars are accreted onto the Milky Way vs those that were born in situ in the @ESAGaia data. See what we find in the data tomorrow! <LINK> @ESAGaia With @linoush95 @SheaGKosmo @AndrewWetzel @astrorobyn @PFHopkins_Astro,https://arxiv.org/abs/1907.06652,"The goal of this study is to present the development of a machine learning based approach that utilizes phase space alone to separate the Gaia DR2 stars into two categories: those accreted onto the Milky Way from those that are in situ. Traditional selection methods that have been used to identify accreted stars typically rely on full 3D velocity, metallicity information, or both, which significantly reduces the number of classifiable stars. The approach advocated here is applicable to a much larger portion of Gaia DR2. A method known as ""transfer learning"" is shown to be effective through extensive testing on a set of mock Gaia catalogs that are based on the FIRE cosmological zoom-in hydrodynamic simulations of Milky Way-mass galaxies. The machine is first trained on simulated data using only 5D kinematics as inputs and is then further trained on a cross-matched Gaia/RAVE data set, which improves sensitivity to properties of the real Milky Way. The result is a catalog that identifies around 767,000 accreted stars within Gaia DR2. This catalog can yield empirical insights into the merger history of the Milky Way and could be used to infer properties of the dark matter distribution. ",Cataloging Accreted Stars within Gaia DR2 using Deep Learning,2,"['Very pleased to announce our new paper using #MachineLearning to tell what stars are accreted onto the Milky Way vs those that were born in situ in the \n@ESAGaia\n data. See what we find in the data tomorrow! <LINK>', '@ESAGaia With @linoush95 @SheaGKosmo @AndrewWetzel @astrorobyn @PFHopkins_Astro']",19,07,292 |
11,126,1154295187121876997,841703218979733505,Andrei Igoshev,In our recent research <LINK> using @ESAGaia we find that the timescale of common envelope differs for hot subdwarf stars and CVs. Mass ejection happens very fast (~100 years) for sdBs and it takes much longer for cataclysmic variables (~10000 years).,https://arxiv.org/abs/1907.10068,"Evolution of close binaries often proceeds through the common envelope stage. The physics of the envelope ejection (CEE) is not yet understood, and several mechanisms were suggested to be involved. These could give rise to different timescales for the CEE mass-loss. In order to probe the CEE-timescales we study wide companions to post-CE binaries. Faster mass-loss timescales give rise to higher disruption rates of wide binaries and result in larger average separations. We make use of data from Gaia DR2 to search for ultra-wide companions (projected separations $10^3$-$2\times 10^5$ a.u. and $M_2 > 0.4$ M$_\odot$) to several types of post-CEE systems, including sdBs, white-dwarf post-common binaries, and cataclysmic variables. We find a (wide-orbit) multiplicity fraction of $1.4\pm 0.2$ per cent for sdBs to be compared with a multiplicity fraction of $5.0\pm 0.2$ per cent for late-B/A/F stars which are possible sdB progenitors. The distribution of projected separations of ultra-wide pairs to main sequence stars and sdBs differs significantly and is compatible with prompt mass loss (upper limit on common envelope ejection timescale of $10^2$ years). The smaller statistics of ultra-wide companions to cataclysmic variables and post-CEE binaries provide weaker constraints. Nevertheless, the survival rate of ultra-wide pairs to the cataclysmic variables suggest much longer, $\sim10^4$ years timescales for the CEE in these systems, possibly suggesting non-dynamical CEE in this regime. ","Inferred timescales for common envelope ejection using wide astrometric companions",1,['In our recent research <LINK> using @ESAGaia we find that the timescale of common envelope differs for hot subdwarf stars and CVs. Mass ejection happens very fast (~100 years) for sdBs and it takes much longer for cataclysmic variables (~10000 years).'],19,07,251 |
12,99,1372116012309549058,187336449,Luis Welbanks,"Advanced modeling frameworks are essential to reliably extract atmospheric properties from the data of today and tomorrow. Check out our new paper introducing Aurora, a next-generation generalized retrieval framework. Today on the arXiv <LINK> <LINK> @V_Parmentier I'm glad you like the logo! All credit goes to Amanda Smith at the IoA. She is the design expert and I had to share her amazing work :) As for the exo-Aurora, we'll work on that ;) Hope you like the paper!",http://arxiv.org/abs/2103.08600,"Atmospheric retrievals of exoplanetary transmission spectra provide important constraints on various properties such as chemical abundances, cloud/haze properties, and characteristic temperatures, at the day-night atmospheric terminator. To date, most spectra have been observed for giant exoplanets due to which retrievals typically assume H-rich atmospheres. However, recent observations of mini-Neptunes/super-Earths, and the promise of upcoming facilities including JWST, call for a new generation of retrievals that can address a wide range of atmospheric compositions and related complexities. Here we report Aurora, a next-generation atmospheric retrieval framework that builds upon state-of-the-art architectures and incorporates the following key advancements: a) a generalised compositional retrieval allowing for H-rich and H-poor atmospheres, b) a generalised prescription for inhomogeneous clouds/hazes, c) multiple Bayesian inference algorithms for high-dimensional retrievals, d) modular considerations for refraction, forward scattering, and Mie-scattering, and e) noise modeling functionalities. We demonstrate Aurora on current and/or synthetic observations of hot Jupiter HD209458b, mini-Neptune K218b, and rocky exoplanet TRAPPIST1d. Using current HD209458b spectra, we demonstrate the robustness of our framework and cloud/haze prescription against assumptions of H-rich/H-poor atmospheres, improving on previous treatments. Using real and synthetic spectra of K218b, we demonstrate the agnostic approach to confidently constrain its bulk atmospheric composition and obtain precise abundance estimates. For TRAPPIST1d, 10 JWST NIRSpec transits can enable identification of the main atmospheric component for cloud-free CO$_2$-rich and N$_2$-rich atmospheres, and abundance constraints on trace gases including initial indications of O$_3$ if present at enhanced levels ($\sim$10-100x Earth levels). ","Aurora: A Generalised Retrieval Framework for Exoplanetary Transmission Spectra",2,"['Advanced modeling frameworks are essential to reliably extract atmospheric properties from the data of today and tomorrow. Check out our new paper introducing Aurora, a next-generation generalized retrieval framework. Today on the arXiv <LINK> <LINK>', ""@V_Parmentier I'm glad you like the logo! All credit goes to Amanda Smith at the IoA. She is the design expert and I had to share her amazing work :)\n\nAs for the exo-Aurora, we'll work on that ;) Hope you like the paper!""]",21,03,470 |
13,65,1070144193362870272,954465907539152897,Dr. Doctor,"New paper from our team analyzing our Dark Energy Camera data from optical follow-up of GW170814! Check it out! <LINK> Spoiler: We didn't find any associated candidates, but we have done the most comprehensive search for optical emission from binary-black-hole mergers in terms of gravitational-wave sky-map coverage!",http://arxiv.org/abs/1812.01579,"Binary black hole (BBH) mergers found by the LIGO and Virgo detectors are of immense scientific interest to the astrophysics community, but are considered unlikely to be sources of electromagnetic emission. To test whether they have rapidly fading optical counterparts, we used the Dark Energy Camera to perform an $i$-band search for the BBH merger GW170814, the first gravitational wave detected by three interferometers. The 87-deg$^2$ localization region (at 90\% confidence) centered in the Dark Energy Survey (DES) footprint enabled us to image 86\% of the probable sky area to a depth of $i\sim 23$ mag and provide the most comprehensive dataset to search for EM emission from BBH mergers. To identify candidates, we perform difference imaging with our search images and with templates from pre-existing DES images. The analysis strategy and selection requirements were designed to remove supernovae and to identify transients that decline in the first two epochs. We find two candidates, each of which is spatially coincident with a star or a high-redshift galaxy in the DES catalogs, and they are thus unlikely to be associated with GW170814. Our search finds no candidates associated with GW170814, disfavoring rapidly declining optical emission from BBH mergers brighter than $i\sim 23$ mag ($L_{\rm optical} \sim 5\times10^{41}$ erg/s) 1-2 days after coalescence. In terms of GW sky map coverage, this is the most complete search for optical counterparts to BBH mergers to date ","A Search for Optical Emission from Binary-Black-Hole Merger GW170814 with the Dark Energy Camera",2,"['New paper from our team analyzing our Dark Energy Camera data from optical follow-up of GW170814! Check it out!\n<LINK>', ""Spoiler: We didn't find any associated candidates, but we have done the most comprehensive search for optical emission from binary-black-hole mergers in terms of gravitational-wave sky-map coverage!""]",18,12,317 |
14,53,1296506144022769666,29997510,Neel Dey,"It never ceases to surprise just how versatile diffeomorphic registration is. In a new #MICCAI OMIA workshop paper with Guillaume Gisbert, we denoise clinical-grade #OCT images where repeat scans with large nonlinear deformations are commonly acquired: <LINK> <LINK> A hybrid registration+unsupervised deep denoising approach outperforms classic unsupervised denoising methods. Of course, the classic methods don't require repeats, so there's the catch. <LINK>",https://arxiv.org/abs/2008.08024,"Optical Coherence Tomography (OCT) is pervasive in both the research and clinical practice of Ophthalmology. However, OCT images are strongly corrupted by noise, limiting their interpretation. Current OCT denoisers leverage assumptions on noise distributions or generate targets for training deep supervised denoisers via averaging of repeat acquisitions. However, recent self-supervised advances allow the training of deep denoising networks using only repeat acquisitions without clean targets as ground truth, reducing the burden of supervised learning. Despite the clear advantages of self-supervised methods, their use is precluded as OCT shows strong structural deformations even between sequential scans of the same subject due to involuntary eye motion. Further, direct nonlinear alignment of repeats induces correlation of the noise between images. In this paper, we propose a joint diffeomorphic template estimation and denoising framework which enables the use of self-supervised denoising for motion deformed repeat acquisitions, without empirically registering their noise realizations. Strong qualitative and quantitative improvements are achieved in denoising OCT images, with generic utility in any imaging modality amenable to multiple exposures. ","Self-supervised Denoising via Diffeomorphic Template Estimation: Application to Optical Coherence Tomography",2,"['It never ceases to surprise just how versatile diffeomorphic registration is.\n\nIn a new #MICCAI OMIA workshop paper with Guillaume Gisbert, we denoise clinical-grade #OCT images where repeat scans with large nonlinear deformations are commonly acquired: <LINK> <LINK>', ""A hybrid registration+unsupervised deep denoising approach outperforms classic unsupervised denoising methods. Of course, the classic methods don't require repeats, so there's the catch. https://t.co/lI79IKlW8X""]",20,08,460 |
15,34,1420111695364640768,1305574724743974912,Neeraja Gupta,"📢New Working Paper📢 (w. @AlistairEcon & @rigotti_luca) Social scientists now have multiple populations from which they can collect data: which one should they use? The answer depends on cost, convenience, and data quality: The Experimenter's Dilemma. <LINK> 1/N Main idea: Fix a budget and form the experimenter's preference (stat. power) over: (i) observation cost; (ii) attenuation of effect size. Online populations offer low costs, but noise can attenuate treatment effects. Lab more expensive, but potentially higher quality data. 2/N <LINK> Fixing an experimental budget on each population and using games with no effective tension to measure noise, we find Prolific should be dominant. However, even at 60% noise MTurk is still potentially better than the lab due to very cheap observations. 3/N <LINK> However, moving to a static over two prisoner's dilemma games the lab actually performs better: both Prolific and MTurk exhibit almost zero elasticity of response here. 4/N <LINK> Conclusions: MTurk is dominated by Prolific, where the higher noise is not worth the reduced cost. However, for our more subtle social dilemma comparison, the physical lab has an important role despite high observation costs, with greater sensitivity to induced payoffs. 5/5 @Danielf_Parra @m_serra_garcia @AlistairEcon @rigotti_luca Thanks Daniel! We are pretty excited about it too :)",https://arxiv.org/abs/2107.05064,"We compare three populations commonly used in experiments by economists and other social scientists: undergraduate students at a physical location (lab), Amazon's Mechanical Turk (MTurk), and Prolific. The comparison is made along three dimensions: the noise in the data due to inattention, the cost per observation, and the elasticity of response. We draw samples from each population, examining decisions in four one-shot games with varying tensions between the individual and socially efficient choices. When there is no tension, where individual and pro-social incentives coincide, noisy behavior accounts for 60% of the observations on MTurk, 19% on Prolific, and 14% for the lab. Taking costs into account, if noisy data is the only concern Prolific dominates from an inferential power point of view, combining relatively low noise with a cost per observation one fifth of the lab's. However, because the lab population is more sensitive to treatment, across our main PD game comparison the lab still outperforms both Prolific and MTurk. ",The Experimenters' Dilemma: Inferential Preferences over Populations,6,"[""📢New Working Paper📢\n(w. @AlistairEcon & @rigotti_luca)\nSocial scientists now have multiple populations from which they can collect data: which one should they use? The answer depends on cost, convenience, and data quality: The Experimenter's Dilemma.\n<LINK>\n1/N"", ""Main idea: Fix a budget and form the experimenter's preference (stat. power) over: (i) observation cost; (ii) attenuation of effect size. Online populations offer low costs, but noise can attenuate treatment effects. Lab more expensive, but potentially higher quality data.\n2/N https://t.co/ryNvNX84Vn"", 'Fixing an experimental budget on each population and using games with no effective tension to measure noise, we find Prolific should be dominant. However, even at 60% noise MTurk is still potentially better than the lab due to very cheap observations.\n3/N https://t.co/oL4WK1kc8Z', ""However, moving to a static over two prisoner's dilemma games the lab actually performs better: both Prolific and MTurk exhibit almost zero elasticity of response here.\n4/N https://t.co/aCKbjL3D4B"", 'Conclusions: MTurk is dominated by Prolific, where the higher noise is not worth the reduced cost. However, for our more subtle social dilemma comparison, the physical lab has an important role despite high observation costs, with greater sensitivity to induced payoffs.\n5/5', '@Danielf_Parra @m_serra_garcia @AlistairEcon @rigotti_luca Thanks Daniel! We are pretty excited about it too :)']",21,07,1376 |
16,220,1512029224571445256,1669743397,Michael Alexander Riegler,Are visual explanations of #AI algorithms helpful for #medical doctors? Check our preprint where we conducted a study involving a large number of #gastroenterologists <LINK> @Strumke @sravmd @ThomasdeLange1 @AndreaStoras @vlbthambawita @simula_research,https://arxiv.org/abs/2204.00617,"Deep learning has in recent years achieved immense success in all areas of computer vision and has the potential of assisting medical doctors in analyzing visual content for disease and other abnormalities. However, the current state of deep learning is very much a black box, making medical professionals highly skeptical about integrating these methods into clinical practice. Several methods have been proposed in order to shine some light onto these black boxes, but there is no consensus on the opinion of the medical doctors that will consume these explanations. This paper presents a study asking medical doctors about their opinion of current state-of-the-art explainable artificial intelligence methods when applied to a gastrointestinal disease detection use case. We compare two different categories of explanation methods, intrinsic and extrinsic, and gauge their opinion of the current value of these explanations. The results indicate that intrinsic explanations are preferred and that explanation. ","Visual explanations for polyp detection: How medical doctors assess intrinsic versus extrinsic explanations",1,['Are visual explanations of #AI algorithms helpful for #medical doctors? Check our preprint where we conducted a study involving a large number of #gastroenterologists <LINK> @Strumke @sravmd @ThomasdeLange1 @AndreaStoras @vlbthambawita @simula_research'],22,04,252 |
17,8,1445350214127206404,1280041792122093568,Yi-Ling Chung,"New #ArgMining #NLProc paper 🤗 🚨Multilingual Counter Narrative Type Classification 🚨 is now out: <LINK> Joint work with @m_guerini and @ragerri A short thread: @m_guerini @ragerri Multilingual and diverse Counter Narratives (CNs) are needed to fight online hate and develop automatic CN evaluation metrics. We conducted CN type classification for EN, IT and FR, evaluating SoTA pre-trained LMs in monolingual, multilingual and cross-lingual settings. Some key findings: 1) The performance is promising for the majority classes (facts, question, denouncing) 2) Classifying humor is still challenging especially for Italian and French, due to the use of figurative language and few instances in training data 3) Combining training data from the three source languages improves performance over the monolingual evaluation 4) The best overall results are obtained if we translate every language to English before cross-lingual prediction",https://arxiv.org/abs/2109.13664,"The growing interest in employing counter narratives for hatred intervention brings with it a focus on dataset creation and automation strategies. In this scenario, learning to recognize counter narrative types from natural text is expected to be useful for applications such as hate speech countering, where operators from non-governmental organizations are supposed to answer to hate with several and diverse arguments that can be mined from online sources. This paper presents the first multilingual work on counter narrative type classification, evaluating SoTA pre-trained language models in monolingual, multilingual and cross-lingual settings. When considering a fine-grained annotation of counter narrative classes, we report strong baseline classification results for the majority of the counter narrative types, especially if we translate every language to English before cross-lingual prediction. This suggests that knowledge about counter narratives can be successfully transferred across languages. ",Multilingual Counter Narrative Type Classification,4,"['New #ArgMining #NLProc paper 🤗\n\n🚨Multilingual Counter Narrative Type Classification 🚨\n\nis now out: <LINK>\n\nJoint work with @m_guerini and @ragerri\n\nA short thread:', '@m_guerini @ragerri Multilingual and diverse Counter Narratives (CNs) are needed to fight online hate and develop automatic CN evaluation metrics.\n\nWe conducted CN type classification for EN, IT and FR, evaluating SoTA pre-trained LMs in monolingual, multilingual and cross-lingual settings.', 'Some key findings:\n\n1) The performance is promising for the majority classes (facts, question, denouncing)\n\n2) Classifying humor is still challenging especially for Italian and French, due to the use of figurative language and few instances in training data', '3) Combining training data from the three source languages improves performance over the monolingual evaluation\n\n4) The best overall results are obtained if we translate every language to English before cross-lingual prediction']",21,09,933 |
18,25,1333403358917496838,887278045761077248,Dmytro Mishkin 🇺🇦,"Efficient Initial Pose-graph Generation for Global SfM <LINK> Our new paper without deep learning invoved! Tl;dr: Bag of tricks for 7x speed-up camera pose graph generation (29 hours for 402 130 image pairs vs 202 hours originally) Details in the thread 1/6 <LINK> Idea 1: Do not match hard pairs, unless necessary. Implementation: - calculate global similarity with GeM retrieval - use A* to find the easiest path between not-yet-matched pairs via already matched ones. Get the camera pose estimation from graph. Verify by guided matching 2/6 <LINK> Idea 2: to get the correspondence for the A*-based pose, use guided matching. Implementation: Get the tentative corresponces using known F. Verify by SIFT distance ratio (among only epipolar consistent only, ~2..30). 3/6 <LINK> Idea 2 does not work naively: default Lowe SNN ratio of 0.8..0.9 is too permissive. Solution: Adaptive SNN ratio based on the number of descriptor pool 4/6 <LINK> Idea 3: if some features useful for match image A, they are are likely to be useful to match image B as well and that is stronger signal than Lowe SNN ratio. Use it to speed-up the RANSAC via PROSAC sampling 5/6 <LINK> All these things together lead to almost 7x speed-up of getting the initial pose graph for the global SfM bundle adjustment without compromise in quality: see exhaustive matching (EM) vs A*+EH. 6/6 <LINK>",https://arxiv.org/abs/2011.11986,"We propose ways to speed up the initial pose-graph generation for global Structure-from-Motion algorithms. To avoid forming tentative point correspondences by FLANN and geometric verification by RANSAC, which are the most time-consuming steps of the pose-graph creation, we propose two new methods - built on the fact that image pairs usually are matched consecutively. Thus, candidate relative poses can be recovered from paths in the partly-built pose-graph. We propose a heuristic for the A* traversal, considering global similarity of images and the quality of the pose-graph edges. Given a relative pose from a path, descriptor-based feature matching is made ""light-weight"" by exploiting the known epipolar geometry. To speed up PROSAC-based sampling when RANSAC is applied, we propose a third method to order the correspondences by their inlier probabilities from previous estimations. The algorithms are tested on 402130 image pairs from the 1DSfM dataset and they speed up the feature matching 17 times and pose estimation 5 times. ",Efficient Initial Pose-graph Generation for Global SfM,6,"['Efficient Initial Pose-graph Generation for Global SfM\n<LINK>\n\nOur new paper without deep learning invoved!\n\nTl;dr: Bag of tricks for 7x speed-up camera pose graph generation (29 hours for 402 130 image pairs vs 202 hours originally)\n\nDetails in the thread 1/6 <LINK>', 'Idea 1: Do not match hard pairs, unless necessary. Implementation:\n- calculate global similarity with GeM retrieval\n- use A* to find the easiest path between not-yet-matched pairs via already matched ones. Get the camera pose estimation from graph. Verify by guided matching 2/6 https://t.co/aRsIkPbz4u', 'Idea 2: to get the correspondence for the A*-based pose, use guided matching.\nImplementation: Get the tentative corresponces using known F. Verify by SIFT distance ratio (among only epipolar consistent only, ~2..30).\n3/6 https://t.co/BaLzo1X9J2', 'Idea 2 does not work naively: default Lowe SNN ratio of 0.8..0.9 is too permissive. Solution: Adaptive SNN ratio based on the number of descriptor pool \n4/6 https://t.co/aMqfiFZLfU', 'Idea 3: if some features useful for match image A, they are are likely to be useful to match image B as well and that is stronger signal than Lowe SNN ratio. \nUse it to speed-up the RANSAC via PROSAC sampling\n\n5/6 https://t.co/pg2pIt30o4', 'All these things together lead to almost 7x speed-up of getting the initial pose graph for the global SfM bundle adjustment without compromise in quality: see exhaustive matching (EM) vs A*+EH.\n6/6 https://t.co/dQb5vCOjjJ']",20,11,1365 |
19,39,862243144808767488,3716338821,Mikko Tuomi,"New paper of @phillippro: ""Agatha: disentangling periodic signals from correlated noise in a periodogram framework"" <LINK> <LINK> .@phillippro The freely available software ""Agatha"" can calculate useful moving periodograms to study time-invariance of periodic signals. <LINK> .@phillippro There is also an expression for a Bayesian version of the periodogram, worked out analytically by @phillippro. <LINK> .@phillippro The Agatha application is accessible online: <LINK>",https://arxiv.org/abs/1705.03089,"Periodograms are used as a key significance assessment and visualisation tool to display the significant periodicities in unevenly sampled time series. We introduce a framework of periodograms, called ""Agatha"", to disentangle periodic signals from correlated noise and to solve the 2-dimensional model selection problem: signal dimension and noise model dimension. These periodograms are calculated by applying likelihood maximization and marginalization and combined in a self-consistent way. We compare Agatha with other periodograms for the detection of Keplerian signals in synthetic radial velocity data produced for the Radial Velocity Challenge as well as in radial velocity datasets of several Sun-like stars. In our tests we find Agatha is able to recover signals to the adopted detection limit of the radial velocity challenge. Applied to real radial velocity, we use Agatha to confirm previous analysis of CoRoT-7 and to find two new planet candidates with minimum masses of 15.1 $M_\oplus$ and 7.08 $M_\oplus$ orbiting HD177565 and HD41248, with periods of 44.5 d and 13.4 d, respectively. We find that Agatha outperforms other periodograms in terms of removing correlated noise and assessing the significances of signals with more robust metrics. Moreover, it can be used to select the optimal noise model and to test the consistency of signals in time. Agatha is intended to be flexible enough to be applied to time series analyses in other astronomical and scientific disciplines. Agatha is available at this http URL ","Agatha: disentangling periodic signals from correlated noise in a periodogram framework",4,"['New paper of @phillippro: ""Agatha: disentangling periodic signals from correlated noise in a periodogram framework"" <LINK> <LINK>', '.@phillippro The freely available software ""Agatha"" can calculate useful moving periodograms to study time-invariance of periodic signals. https://t.co/4u7S0Elpqc', '.@phillippro There is also an expression for a Bayesian version of the periodogram, worked out analytically by @phillippro. https://t.co/Y5PXgGHluw', '.@phillippro The Agatha application is accessible online: https://t.co/voDDpeqFyw']",17,05,471 |
20,39,1376456668293296128,4249537197,Christian Wolf,New paper by @JannySteeven of our group: We forecast trajectories after observing an initial seq and show that any latent representation of size >= 2m+1 with linear and stable dynamics has a solution. Joint work with V. Andrieu + M. Nadri (LAGEPP). <LINK> <LINK>,https://arxiv.org/abs/2103.12443,"We address the problem of output prediction, ie. designing a model for autonomous nonlinear systems capable of forecasting their future observations. We first define a general framework bringing together the necessary properties for the development of such an output predictor. In particular, we look at this problem from two different viewpoints, control theory and data-driven techniques (machine learning), and try to formulate it in a consistent way, reducing the gap between the two fields. Building on this formulation and problem definition, we propose a predictor structure based on the Kazantzis-Kravaris/Luenberger (KKL) observer and we show that KKL fits well into our general framework. Finally, we propose a constructive solution for this predictor that solely relies on a small set of trajectories measured from the system. Our experiments show that our solution allows to obtain an efficient predictor over a subset of the observation space. ",Deep KKL: Data-driven Output Prediction for Non-Linear Systems,1,['New paper by @JannySteeven of our group: We forecast trajectories after observing an initial seq and show that any latent representation of size >= 2m+1 with linear and stable dynamics has a solution. Joint work with V. Andrieu + M. Nadri (LAGEPP).\n<LINK> <LINK>'],21,03,265 |
21,224,1408158879775485953,1124967058414825472,Sindhana Pannir-Sivajothi,My first paper in grad school 🤩 Thanks for being a wonderful advisor @ucsd_yuen! I learned a lot from you @JorgeACamGA and @LamQ310. Our first paper together @ShubhamSinha029 ❤️ We find that polariton condensation can change reaction yields and rates. <LINK> <LINK> @pleplostelous @ucsd_yuen @JorgeACamGA @LamQ310 @ShubhamSinha029 Thanks Avishek! :) @Sridevi292 @ucsd_yuen @JorgeACamGA @LamQ310 @ShubhamSinha029 Thanks Sridevi! :) @maity_indra_ @ucsd_yuen @JorgeACamGA @LamQ310 @ShubhamSinha029 Thanks Indrajit :D,https://arxiv.org/abs/2106.12156,"When molecular transitions strongly couple to photon modes, they form hybrid light-matter modes called polaritons. Collective vibrational strong coupling is a promising avenue for control of chemistry, but this can be deterred by the large number of quasi-degenerate dark modes. The macroscopic occupation of a single polariton mode by excitations, as observed in Bose-Einstein condensation, offers promise for overcoming this issue. Here we theoretically investigate the effect of vibrational polariton condensation on the kinetics of electron transfer processes. Compared with excitation with infrared laser sources, the condensate changes the reaction yield significantly due to additional channels with reduced activation barriers resulting from the large accumulation of energy in the lower polariton, and the many modes available for energy redistribution during the reaction. Our results offer tantalizing opportunities to use condensates for driving chemical reactions, kinetically bypassing usual constraints of fast intramolecular vibrational redistribution in condensed phase. ",Driving chemical reactions with polariton condensates,4,"['My first paper in grad school 🤩 Thanks for being a wonderful advisor @ucsd_yuen! 
I learned a lot from you @JorgeACamGA and @LamQ310.\n\nOur first paper together @ShubhamSinha029 ❤️\n\nWe find that polariton condensation can change reaction yields and rates.\n<LINK> <LINK>', '@pleplostelous @ucsd_yuen @JorgeACamGA @LamQ310 @ShubhamSinha029 Thanks Avishek! :)', '@Sridevi292 @ucsd_yuen @JorgeACamGA @LamQ310 @ShubhamSinha029 Thanks Sridevi! :)', '@maity_indra_ @ucsd_yuen @JorgeACamGA @LamQ310 @ShubhamSinha029 Thanks Indrajit :D']",21,06,513 |
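The polariton-condensate row above reports changed reaction rates via reduced activation barriers. Purely as an illustrative Arrhenius-type picture (not the paper's electron-transfer model, and with hypothetical numbers): lowering the barrier by the energy accumulated in the lower polariton changes the rate exponentially.

```python
import numpy as np

kB_T = 0.0259     # thermal energy at ~300 K, in eV
Ea = 0.50         # hypothetical bare activation barrier (eV)
dE = 0.10         # hypothetical effective barrier reduction from the condensate (eV)

def arrhenius(barrier, prefactor=1e12):
    """Textbook Arrhenius rate; the prefactor is a placeholder."""
    return prefactor * np.exp(-barrier / kB_T)

# Rate enhancement from the lowered barrier, = exp(dE / kB_T):
enhancement = arrhenius(Ea - dE) / arrhenius(Ea)
```

Even a modest 0.1 eV reduction gives an order-of-magnitude rate change at room temperature, which is why channels with reduced barriers can dominate the yield.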
22,15,1189536300338044929,4639078397,John Wise,"New paper day! Focusing on the growth of a massive black hole at z=7.5, we find that UV radiation from the SMBH and nuclear stellar cluster regulates runaway star formation within 30 pc, promoting early rapid growth of the SMBH. Led by Ji-hoon Kim (SNU) <LINK> <LINK> Our work also demonstrates the need to match the input physical models with the simulation mass/spatial resolution. Co-authors: @TomAbelStanford, Y. Jo (SNU), @JoelPrimack, @PFHopkins_Astro",https://arxiv.org/abs/1910.12888,"As computational resolution of modern cosmological simulations reach ever so close to resolving individual star-forming clumps in a galaxy, a need for ""resolution-appropriate"" physics for a galaxy-scale simulation has never been greater. To this end, we introduce a self-consistent numerical framework that includes explicit treatments of feedback from star-forming molecular clouds (SFMCs) and massive black holes (MBHs). In addition to the thermal supernovae feedback from SFMC particles, photoionizing radiation from both SFMCs and MBHs is tracked through full 3-dimensional ray tracing. A mechanical feedback channel from MBHs is also considered. Using our framework, we perform a state-of-the-art cosmological simulation of a quasar-host galaxy at z~7.5 for ~25 Myrs with all relevant galactic components such as dark matter, gas, SFMCs, and an embedded MBH seed of ~> 1e6 Ms. We find that feedback from SFMCs and an accreting MBH suppresses runaway star formation locally in the galactic core region. Newly included radiation feedback from SFMCs, combined with feedback from the MBH, helps the MBH grow faster by retaining gas that eventually accretes on to the MBH. Our experiment demonstrates that previously undiscussed types of interplay between gas, SFMCs, and a MBH may hold important clues about the growth and feedback of quasars and their host galaxies in the high-redshift Universe. 
","High-redshift Galaxy Formation with Self-consistently Modeled Stars and |
Massive Black Holes: Stellar Feedback and Quasar Growth",2,"['New paper day! Focusing on the growth of a massive black hole at z=7.5, we find that UV radiation from the SMBH and nuclear stellar cluster regulates runaway star formation within 30 pc, promoting early rapid growth of the SMBH. Led by Ji-hoon Kim (SNU) <LINK> <LINK>', 'Our work also demonstrates the need to match the input physical models with the simulation mass/spatial resolution. Co-authors: @TomAbelStanford, Y. Jo (SNU), @JoelPrimack, @PFHopkins_Astro']",19,10,457 |
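For scale on the "rapid growth" claim in the row above, a standard back-of-envelope estimate (not the simulation itself): continuous Eddington-limited accretion grows a black hole exponentially with the Salpeter e-folding time.

```python
import numpy as np

eps = 0.1                                  # assumed radiative efficiency
t_salpeter = 450.0 * eps / (1.0 - eps)     # e-folding time in Myr, ~50 Myr

def mbh_mass(m_seed, t_myr):
    """Eddington-limited exponential growth from a seed mass."""
    return m_seed * np.exp(t_myr / t_salpeter)

# Over the ~25 Myr simulated in the paper, an Eddington-rate accretor
# grows by only a factor of ~1.6 -- retaining gas near the hole matters.
growth = mbh_mass(1.0, 25.0)
```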
23,231,1374332704662167557,996420531334369281,Andres Karjus,"Preprint ""Conceptual similarity and communicative need shape colexification"" w/ @DrAlgernon @SimonKirby Tianyu Wang @kennysmithed: <LINK>. We carry out 4 artificial language experiments (incl a self-repl) to test 2 hypotheses from a crosslinguistic study 1/5 <LINK> 2/5 ....Xu et al 2020 (<LINK>) show using a sample of 250 langs that if two meanings get colexified (expressed w same form), it's usually similar ones, like snow&ice, demonstrating a constraint on the formation of lexicons. But they hypothesize that... 3/5 if some similar meanings are important for a culture to distinguish, e.g. brother&sister, then this communicative need prevents colexification (cf. also <LINK>) We test both claims using dyadic artificial language experiments and find support for both: 4/5 in the neutral condition, participants are more likely to colexify similar meanings than dissimilar ones; but when we manipulate communicative need by making them distinguish between similar meaning pairs more often, they change behaviour to mainatain efficient communication. <LINK> 5/5 Language change is driven by numerous interacting forces; our experimental results support the importance of speakers' communicative need, as a factor modulating the complexity-informativeness tradeoff proposed in previous literature. <LINK> Oh and of course all the data from the 4 experiments and the analysis code are all available: <LINK> - incl the source code for the #Shiny game app I developed for the study (turns out you totally can make multiplayer games in #rstats ...if you're mad enough) <LINK>",https://arxiv.org/abs/2103.11024,"Colexification refers to the phenomenon of multiple meanings sharing one word in a language. Cross-linguistic lexification patterns have been shown to be largely predictable, as similar concepts are often colexified. 
We test a recent claim that, beyond this general tendency, communicative needs play an important role in shaping colexification patterns. We approach this question by means of a series of human experiments, using an artificial language communication game paradigm. Our results across four experiments match the previous cross-linguistic findings: all other things being equal, speakers do prefer to colexify similar concepts. However, we also find evidence supporting the communicative need hypothesis: when faced with a frequent need to distinguish similar pairs of meanings, speakers adjust their colexification preferences to maintain communicative efficiency, and avoid colexifying those similar meanings which need to be distinguished in communication. This research provides further evidence to support the argument that languages are shaped by the needs and preferences of their speakers. ","Conceptual similarity and communicative need shape colexification: an |
experimental study",6,"['Preprint ""Conceptual similarity and communicative need shape colexification"" w/ @DrAlgernon @SimonKirby Tianyu Wang @kennysmithed: <LINK>. We carry out 4 artificial language experiments (incl a self-repl) to test 2 hypotheses from a crosslinguistic study 1/5 <LINK>', ""2/5 ....Xu et al 2020 (https://t.co/DkeS91E0Zv) show using a sample of 250 langs that if two meanings get colexified (expressed w same form), it's usually similar ones, like snow&ice, demonstrating a constraint on the formation of lexicons. But they hypothesize that..."", '3/5 if some similar meanings are important for a culture to distinguish, e.g. brother&sister, then this communicative need prevents colexification (cf. also https://t.co/3scUpI5OjP)\nWe test both claims using dyadic artificial language experiments and find support for both:', '4/5 in the neutral condition, participants are more likely to colexify similar meanings than dissimilar ones; but when we manipulate communicative need by making them distinguish between similar meaning pairs more often, they change behaviour to mainatain efficient communication. https://t.co/nXtU8kKBZN', ""5/5 Language change is driven by numerous interacting forces; our experimental results support the importance of speakers' communicative need, as a factor modulating the complexity-informativeness tradeoff proposed in previous literature. https://t.co/fPYBRCfE5e"", ""Oh and of course all the data from the 4 experiments and the analysis code are all available: https://t.co/Jnkx03Amqx - incl the source code for the #Shiny game app I developed for the study (turns out you totally can make multiplayer games in #rstats ...if you're mad enough) https://t.co/0pWOr6Cojy""]",21,03,1576 |
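The colexification row above describes two opposing pressures: similarity raises the tendency to colexify, communicative need lowers it. A hypothetical toy model of that trade-off (the functional form and coefficients are invented for illustration, not fitted to the study's data):

```python
import numpy as np

def p_colexify(similarity, need):
    """Hypothetical logistic model: similarity pushes colexification up,
    frequent need to distinguish the pair pushes it back down."""
    return 1 / (1 + np.exp(-(3 * similarity - 4 * need - 1)))

neutral_similar    = p_colexify(0.9, 0.0)   # similar pair, neutral condition
neutral_dissimilar = p_colexify(0.2, 0.0)   # dissimilar pair, neutral condition
need_similar       = p_colexify(0.9, 1.0)   # similar pair under high need
```

Under any model of this shape, the two experimental findings fall out directly: similar pairs colexify more in the neutral condition, and a similar pair under high communicative need colexifies less than it otherwise would.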
24,63,1329360036766904321,720027280051957761,Oliver Newton,"New paper is out! It feels good to finally get this one out the door. Here, we build on previous work to constrain the properties of warm dark matter models using the satellite galaxies of the MW. We also end up with a lower bound on the MW halo mass <LINK> 1/7 In the first part, we compare the predicted abundance of DM subhaloes in MW haloes with the pop. of MW satellites. WDM models that can't produce enough substructure to host the MW dwarf galaxies are ruled out. For thermal relic DM, particle masses below 2 keV aren't viable. 2/7 <LINK> This is a conservative robust lower limit on the thermal relic particle mass that is independent of assumptions about galaxy formation processes. We can improve the result by modelling these. In particular, reionization is important as it suppresses dwarf galaxy formation. 3/7 We use Galform (@DarkerMatters ) to model these processes and explore several reionization prescriptions. In our fiducial scenario, we rule out thermal relic DM with mass below 4 keV. We also find that even the coldest models can't be ruled out if the MW mass is too low. 4/7 <LINK> This is true when the WDM is very cold and differs little from CDM, so we interpret this as a lower bound on the MW halo mass in CDM. Reionization that finishes earlier and models preventing subsequent gas cooling into the smallest galaxies produce the strongest constraints. 5/7 We also discuss a few technical challenges with analyses of this type and compare with other constraints in the literature from different techniques. We are applying this approach to other WDM models so stay tuned for those results! 6/7 Of course, thanks must go to my co-authors and collaborators whose sage advice and insight brought this work to a good conclusion! 
7/7",https://arxiv.org/abs/2011.08865,"The satellite galaxies of the Milky Way (MW) are effective probes of the underlying dark matter (DM) substructure, which is sensitive to the nature of the DM particle. In particular, a class of DM models have a power spectrum cut-off on the mass scale of dwarf galaxies and thus predict only small numbers of substructures below the cut-off mass. This makes the MW satellite system appealing to constrain the DM properties: feasible models must produce enough substructure to host the number of observed Galactic satellites. Here, we compare theoretical predictions of the abundance of DM substructure in thermal relic warm DM (WDM) models with estimates of the total satellite population of the MW. This produces conservative robust lower limits on the allowed mass, $m_\mathrm{th}$, of the thermal relic WDM particle. As the abundance of satellite galaxies depends on the MW halo mass, we marginalize over the corresponding uncertainties and rule out $m_\mathrm{th} \leq 2.02\, \mathrm{keV}$ at 95 per cent confidence independently of assumptions about galaxy formation processes. Modelling some of these - in particular, the effect of reionization, which suppresses the formation of dwarf galaxies - strengthens our constraints on the DM properties and excludes models with $m_\mathrm{th} \leq 3.99\, \mathrm{keV}$ in our fiducial model. We also find that thermal relic models cannot produce enough satellites if the MW halo mass is $M_{200}\leq 0.6\times 10^{12}\, \mathrm{M_\odot}$, which imposes a lower limit on the MW halo mass in CDM. We address several observational and theoretical uncertainties and discuss how improvements in these will strengthen the DM mass constraints. ","Constraints on the properties of warm dark matter using the satellite |
galaxies of the Milky Way",7,"['New paper is out! It feels good to finally get this one out the door.\n\nHere, we build on previous work to constrain the properties of warm dark matter models using the satellite galaxies of the MW. We also end up with a lower bound on the MW halo mass\n\n<LINK>\n1/7', ""In the first part, we compare the predicted abundance of DM subhaloes in MW haloes with the pop. of MW satellites. WDM models that can't produce enough substructure to host the MW dwarf galaxies are ruled out.\n\nFor thermal relic DM, particle masses below 2 keV aren't viable.\n2/7 https://t.co/l4VSXoxqkh"", 'This is a conservative robust lower limit on the thermal relic particle mass that is independent of assumptions about galaxy formation processes.\n\nWe can improve the result by modelling these. In particular, reionization is important as it suppresses dwarf galaxy formation.\n3/7', ""We use Galform (@DarkerMatters ) to model these processes and explore several reionization prescriptions.\n\nIn our fiducial scenario, we rule out thermal relic DM with mass below 4 keV. We also find that even the coldest models can't be ruled out if the MW mass is too low.\n4/7 https://t.co/kDQhrXdSrK"", 'This is true when the WDM is very cold and differs little from CDM, so we interpret this as a lower bound on the MW halo mass in CDM.\n\nReionization that finishes earlier and models preventing subsequent gas cooling into the smallest galaxies produce the strongest constraints.\n5/7', 'We also discuss a few technical challenges with analyses of this type and compare with other constraints in the literature from different techniques.\n\nWe are applying this approach to other WDM models so stay tuned for those results!\n\n6/7', 'Of course, thanks must go to my co-authors and collaborators whose sage advice and insight brought this work to a good conclusion!\n\n7/7']",20,11,1760 |
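The WDM row above rests on a counting argument: a thermal relic mass is ruled out if it cannot produce enough subhaloes to host the observed Milky Way satellites. A sketch of that logic only — the suppression law and every number below are illustrative placeholders, not the paper's calibrated model:

```python
import numpy as np

def n_subhaloes(m_th_keV, n_cdm=150.0):
    """Hypothetical subhalo count vs. thermal relic mass: colder
    (heavier) particles suppress substructure less."""
    suppression = 1.0 / (1.0 + (2.5 / m_th_keV) ** 2.5)
    return n_cdm * suppression

n_satellites = 120        # placeholder for the inferred MW satellite count

# Viable masses are those that can host every observed satellite;
# the smallest viable mass is the lower limit on m_th.
masses = np.linspace(0.5, 10, 200)
viable = masses[n_subhaloes(masses) >= n_satellites]
lower_limit = viable.min()
```

The same logic run the other way gives the paper's Milky Way mass bound: if the halo is too light, even the coldest model has too few subhaloes.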
25,128,1455808495224229891,21902101,Jim Geach,"Check out our new paper on arXiv today ""Realistic galaxy image simulation via score-based generative models"" <LINK> GANs get a lot of glory, but score-based generative methods can produce superior results compared to GANs. Here we use a denoising diffusion probabilistic model to generate realistic images of galaxies. Half of the galaxies in this image are not real <LINK> We also show that the emergent properties, such as flux, colour, size, are re-produced by the DDPM. The DDPM is trained by slowly adding noise to an image via a Markov Chain, and then the model tries to reverse this process. <LINK> You can use the model to generate realistic images, but it also has other applications, like in-filling missing data (e.g. satellite trails 😬) <LINK> And performing domain transfer - here we ""DESI-fy"" some cartoon sketches. You could use this for image or sketch-based searches. <LINK> Finally, as an aside, we trained a DDPM on the entire @apod archive to generate fake APODs. The images contain slightly surreal landscapes, galaxies, aurorae, etc. You can follow @ThisIsNotAnApod and see more at <LINK> <LINK> This work was led by Mike Smith, a PhD student at @UniofHerts",https://arxiv.org/abs/2111.01713,"We show that a Denoising Diffusion Probabalistic Model (DDPM), a class of score-based generative model, can be used to produce realistic mock images that mimic observations of galaxies. Our method is tested with Dark Energy Spectroscopic Instrument (DESI) grz imaging of galaxies from the Photometry and Rotation curve OBservations from Extragalactic Surveys (PROBES) sample and galaxies selected from the Sloan Digital Sky Survey. Subjectively, the generated galaxies are highly realistic when compared with samples from the real dataset. We quantify the similarity by borrowing from the deep generative learning literature, using the `Fr\'echet Inception Distance' to test for subjective and morphological similarity. 
We also introduce the `Synthetic Galaxy Distance' metric to compare the emergent physical properties (such as total magnitude, colour and half light radius) of a ground truth parent and synthesised child dataset. We argue that the DDPM approach produces sharper and more realistic images than other generative methods such as Adversarial Networks (with the downside of more costly inference), and could be used to produce large samples of synthetic observations tailored to a specific imaging survey. We demonstrate two potential uses of the DDPM: (1) accurate in-painting of occluded data, such as satellite trails, and (2) domain transfer, where new input images can be processed to mimic the properties of the DDPM training set. Here we `DESI-fy' cartoon images as a proof of concept for domain transfer. Finally, we suggest potential applications for score-based approaches that could motivate further research on this topic within the astronomical community. ",Realistic galaxy image simulation via score-based generative models,8,"['Check out our new paper on arXiv today ""Realistic galaxy image simulation via score-based generative models""\n\n<LINK>', 'GANs get a lot of glory, but score-based generative methods can produce superior results compared to GANs. Here we use a denoising diffusion probabilistic model to generate realistic images of galaxies.', 'Half of the galaxies in this image are not real https://t.co/6ZpzfglxDX', 'We also show that the emergent properties, such as flux, colour, size, are re-produced by the DDPM. The DDPM is trained by slowly adding noise to an image via a Markov Chain, and then the model tries to reverse this process. https://t.co/dAj5w3m7iz', 'You can use the model to generate realistic images, but it also has other applications, like in-filling missing data (e.g. satellite trails 😬) https://t.co/x61Xu2gDqq', 'And performing domain transfer - here we ""DESI-fy"" some cartoon sketches. You could use this for image or sketch-based searches. 
https://t.co/7e0mi33G4R', 'Finally, as an aside, we trained a DDPM on the entire @apod archive to generate fake APODs. The images contain slightly surreal landscapes, galaxies, aurorae, etc. You can follow @ThisIsNotAnApod and see more at https://t.co/MExNCLtPwI https://t.co/3SeDSaQwSm', 'This work was led by Mike Smith, a PhD student at @UniofHerts']",21,11,1179 |
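The DDPM row above describes training by "slowly adding noise to an image via a Markov Chain" and learning to reverse it. The forward (noising) half has a well-known closed form; a minimal numpy sketch, using a common linear beta schedule that is not necessarily the paper's exact configuration:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # common linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)         # cumulative product \bar{alpha}_t

def noisy_image(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) directly, without stepping the chain."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((64, 64, 3))       # stand-in for a grz galaxy cutout
x_early, _ = noisy_image(x0, 10, rng)       # still mostly galaxy
x_late, _ = noisy_image(x0, T - 1, rng)     # essentially pure noise
```

The reverse model is trained to predict `eps` from `x_t` and `t`; sampling then runs the chain backwards from pure noise, which is what generates the fake galaxies (and, started from a cartoon instead of noise, enables the "DESI-fy" domain transfer).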
26,7,1488421960824537089,948528424926220288,Johannes Røsok Eskilt,"Excited that my new paper is out! <LINK> I'm extremely grateful for the help I've received from so many people! In the paper, I look at the frequency dependence of the cosmic birefringence signal found in Planck data, and I include the LFI for the first time I sample the birefringence angle individually for each frequency channel, and I sample a power-law model. I get that the signal is very consistent with being frequency-independent, which rules out Faraday rotation as the cause of the signal <LINK> But more work needs to be done to understand the physics of the polarized foreground emission, both dust and synchrotron, before we can know the statistical significance of the measurements",https://arxiv.org/abs/2201.13347,"We present new constraints on the frequency dependence of the cosmic birefringence angle from the Planck data release 4 polarization maps. An axion field coupled to electromagnetism predicts a nearly frequency-independent birefringence angle, $\beta_\nu = \beta$, while Faraday rotation from local magnetic fields and Lorentz violating theories predict a cosmic birefringence angle that is proportional to the frequency, $\nu$, to the power of some integer $n$, $\beta_\nu \propto \nu^n$. In this work, we first sample $\beta_\nu$ individually for each polarized HFI frequency band in addition to the 70 GHz channel from the LFI. We also constrain a power-law formula for the birefringence angle, $\beta_\nu=\beta_0(\nu/\nu_0)^n$, with $\nu_0 = 150$ GHz. For a nearly full-sky measurement, $f_{\text{sky}}=0.93$, we find $\beta_0 = 0.26^{\circ}\pm0.11^\circ$ $(68\% \text{ C.L.})$ and $n=-0.45^{+0.61}_{-0.82}$ when we ignore the intrinsic $EB$ correlations of the polarized foreground emission, and $\beta_0 = 0.33^\circ \pm 0.12^\circ$ and $n=-0.37^{+0.49}_{-0.64}$ when we use a filamentary dust model for the foreground $EB$. 
Next, we use all the polarized Planck maps, including the 30 and 44 GHz frequency bands. These bands have a negligible foreground contribution from polarized dust emission. We, therefore, treat them separately. Without any modeling of the intrinsic $EB$ of the foreground, we generally find that the inclusion of the 30 and 44 GHz frequency bands raises the measured values of $\beta_\nu$ and tightens $n$. At nearly full-sky, we measure $\beta_0=0.29^{\circ+0.10^\circ}_{\phantom{\circ}-0.11^\circ}$ and $n=-0.35^{+0.48}_{-0.47}$. Assuming no frequency dependence, we measure $\beta=0.33^\circ \pm 0.10^\circ$. If our measurements have effectively mitigated the $EB$ of the foreground, our constraints are consistent with a mostly frequency-independent signal of cosmic birefringence. ","Frequency-Dependent Constraints on Cosmic Birefringence from the LFI and |
HFI Planck Data Release 4",3,"[""Excited that my new paper is out! <LINK> I'm extremely grateful for the help I've received from so many people! In the paper, I look at the frequency dependence of the cosmic birefringence signal found in Planck data, and I include the LFI for the first time"", 'I sample the birefringence angle individually for each frequency channel, and I sample a power-law model. I get that the signal is very consistent with being frequency-independent, which rules out Faraday rotation as the cause of the signal https://t.co/ZvA485jHfY', 'But more work needs to be done to understand the physics of the polarized foreground emission, both dust and synchrotron, before we can know the statistical significance of the measurements']",22,01,696 |
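The row above constrains the power-law model for the birefringence angle. That model is simple enough to write down directly; here it is evaluated at the fiducial best-fit values quoted in the abstract (beta_0 = 0.26 deg, n = -0.45, nu_0 = 150 GHz, the fit that ignores foreground EB):

```python
import numpy as np

def beta(nu_GHz, beta0=0.26, n=-0.45, nu0=150.0):
    """Birefringence angle (degrees) under the power-law model
    beta_nu = beta_0 (nu / nu_0)^n constrained in the paper."""
    return beta0 * (nu_GHz / nu0) ** n

# Angle at each Planck polarized band under this fit:
angles = {nu: beta(nu) for nu in (30, 44, 70, 100, 143, 217, 353)}
```

Frequency independence corresponds to n = 0 (the axion-like prediction), while Faraday rotation would give n = -2 — far outside the quoted n = -0.45 (+0.61 / -0.82), which is why the paper rules it out as the cause of the signal.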
27,49,1263152573126672385,119837224,Jason Baldridge,"Happy to share a new ACL paper: Li, He, Zhou, Zhang and Baldridge ""Mapping Natural Language Instructions to Mobile UI Action Sequences"". We build datasets and models for learning to take UI actions based on NL instructions. <LINK> <LINK> We evaluate the end-to-end task with a new dataset called PixelHelp, which we collected from annotators using a Pixel Phone emulator. PixelHelp is small with 187 instructions, but it has fine-grained grounding of instructions to actions (of two to eight steps). <LINK> We decompose the problem into phrase-tuple extraction followed by action prediction conditioned on the phrase-tuples and the screen. The first stage uses a transformer to encode the instruction and produce a tuple sequence. We train this on annotated HowTo instructions. <LINK> The second stage grounds the phrase tuples using a transformer that encodes objects using both their content (metadata) and positional information (spatial and structural). We train this on synthetic command-actions on the Rico public UI corpus. <LINK> We compare the full model with a heuristic baseline that matches phrases to object names and to object encoders using Graph Convolutional Networks. The transformer approach proves much better at both partial instruction grounding and complete grounding. <LINK> To be fair, we probably could have eked out a bit better performance if we had used the Ours model, but we realized that a bit too late. <LINK> We hope the data, and the decomposition of the problem make it possible for others to make progress on the task! I'm keen to see if techniques and ideas from problems like Vision-and-Language Navigation (e.g. <LINK>) can used for this too, especially RL.",https://arxiv.org/abs/2005.03776,"We present a new problem: grounding natural language instructions to mobile user interface actions, and create three new datasets for it. 
For full task evaluation, we create PIXELHELP, a corpus that pairs English instructions with actions performed by people on a mobile UI emulator. To scale training, we decouple the language and action data by (a) annotating action phrase spans in HowTo instructions and (b) synthesizing grounded descriptions of actions for mobile user interfaces. We use a Transformer to extract action phrase tuples from long-range natural language instructions. A grounding Transformer then contextually represents UI objects using both their content and screen position and connects them to object descriptions. Given a starting screen and instruction, our model achieves 70.59% accuracy on predicting complete ground-truth action sequences in PIXELHELP. ",Mapping Natural Language Instructions to Mobile UI Action Sequences,7,"['Happy to share a new ACL paper: Li, He, Zhou, Zhang and Baldridge ""Mapping Natural Language Instructions to Mobile UI Action Sequences"". We build datasets and models for learning to take UI actions based on NL instructions.\n\n<LINK> <LINK>', 'We evaluate the end-to-end task with a new dataset called PixelHelp, which we collected from annotators using a Pixel Phone emulator. PixelHelp is small with 187 instructions, but it has fine-grained grounding of instructions to actions (of two to eight steps). https://t.co/zXYNbGlcQ6', 'We decompose the problem into phrase-tuple extraction followed by action prediction conditioned on the phrase-tuples and the screen. The first stage uses a transformer to encode the instruction and produce a tuple sequence. We train this on annotated HowTo instructions. https://t.co/YdJB3IUSQP', 'The second stage grounds the phrase tuples using a transformer that encodes objects using both their content (metadata) and positional information (spatial and structural). We train this on synthetic command-actions on the Rico public UI corpus. 
https://t.co/ldtMfyrCub', 'We compare the full model with a heuristic baseline that matches phrases to object names and to object encoders using Graph Convolutional Networks. The transformer approach proves much better at both partial instruction grounding and complete grounding. https://t.co/LKgiYQloVJ', 'To be fair, we probably could have eked out a bit better performance if we had used the Ours model, but we realized that a bit too late.\n\nhttps://t.co/ZE8nivmykT', ""We hope the data, and the decomposition of the problem make it possible for others to make progress on the task! I'm keen to see if techniques and ideas from problems like Vision-and-Language Navigation (e.g. https://t.co/56JXuWLPz5) can used for this too, especially RL.""]",20,05,1697 |
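The row above compares the grounding Transformer against "a heuristic baseline that matches phrases to object names". A toy version of that baseline — token overlap between the extracted action phrase and each on-screen object's name; the screen contents here are made up:

```python
def ground(phrase, screen_objects):
    """Pick the screen object whose name shares the most tokens with the
    phrase. This is the heuristic the learned Transformer models beat."""
    tokens = set(phrase.lower().split())
    return max(screen_objects,
               key=lambda obj: len(tokens & set(obj.lower().split())))

screen = ["Settings", "Battery saver", "Display brightness", "Wi-Fi"]
target = ground("turn on battery saver", screen)   # -> "Battery saver"
```

Exact token overlap fails whenever the instruction paraphrases the UI label, which is precisely the gap the contextual object encoders are meant to close.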
28,214,1381898128618573829,1199686509328289793,Jean-Baptiste Cordonnier,"I am happy to present our work at #CVPR2021 from my internship with Google Brain Berlin. We study a CNN inefficiency: every part of the image is processed by same network at same resolution. Instead we extract only ""relevant"" patches at full res. <LINK> <LINK> Our patch selection layer is a standard differentiable layer: the extracted patches can be processed by *any* downstream networks or aggregation scheme (like transformers). Code: <LINK> <LINK> Our top-k patch extraction is end-to-end differentiable thanks to the great work on Differentiable Perturbed Optimizers by @qberthet et al. Their clean framework makes discrete problems (expressible as a LP) differentiable and opens so many opportunities! <LINK> <LINK> I was lucky to collaborate with such a great team :) I learned a lot from their expertise on attention, transformers, vision 🍻",https://arxiv.org/abs/2104.03059,"Neural Networks require large amounts of memory and compute to process high resolution images, even when only a small part of the image is actually informative for the task at hand. We propose a method based on a differentiable Top-K operator to select the most relevant parts of the input to efficiently process high resolution images. Our method may be interfaced with any downstream neural network, is able to aggregate information from different patches in a flexible way, and allows the whole model to be trained end-to-end using backpropagation. We show results for traffic sign recognition, inter-patch relationship reasoning, and fine-grained recognition without using object/part bounding box annotations during training. ",Differentiable Patch Selection for Image Recognition,4,"['I am happy to present our work at #CVPR2021 from my internship with Google Brain Berlin. \n\nWe study a CNN inefficiency: every part of the image is processed by same network at same resolution. 
Instead we extract only ""relevant"" patches at full res.\n\n<LINK> <LINK>', 'Our patch selection layer is a standard differentiable layer: the extracted patches can be processed by *any* downstream networks or aggregation scheme (like transformers).\n\nCode: https://t.co/GqzSAp6u37 https://t.co/ytWEYI4ShV', 'Our top-k patch extraction is end-to-end differentiable thanks to the great work on\xa0Differentiable Perturbed Optimizers by @qberthet et al. Their clean framework makes discrete problems (expressible as a LP) differentiable and opens so many opportunities!\n\nhttps://t.co/txH1CqJzM6 https://t.co/J9AShUIC5M', 'I was lucky to collaborate with such a great team :) I learned a lot from their expertise on attention, transformers, vision 🍻']",21,04,851 |
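The row above credits differentiable perturbed optimizers for making the top-k patch extraction trainable. The core trick, sketched in numpy (Monte-Carlo form, not the paper's production implementation): averaging hard top-k indicator vectors over noise-perturbed copies of the scores yields a smooth relaxation of the selection.

```python
import numpy as np

def hard_topk(scores, k):
    """0/1 indicator of the top-k entries (the non-differentiable step)."""
    ind = np.zeros_like(scores)
    ind[np.argsort(scores)[-k:]] = 1.0
    return ind

def perturbed_topk(scores, k, sigma=0.5, n_samples=1000, rng=None):
    """Expectation of hard top-k under Gaussian perturbations of the
    scores: a smooth, hence differentiable-in-expectation, selection."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = scores + sigma * rng.standard_normal((n_samples, scores.size))
    return np.mean([hard_topk(z, k) for z in noisy], axis=0)

scores = np.array([3.0, 0.1, 2.9, -1.0, 0.2])   # patch relevance scores
soft = perturbed_topk(scores, k=2)              # soft membership in the top-2
```

In the full model these soft memberships weight the extracted full-resolution patches, so gradients flow from the downstream network back into the scoring network.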
29,9,1077234804578533376,554869994,Yonatan Belinkov,"Interested in understanding neural networks for #NLProc ? Looking for reading material for your winter break? Check out our new paper, ""Analysis Methods in Neural Language Processing: A Survey"", to appear in TACL. Preprint: <LINK> Website: <LINK> In the paper, we: - Review various methods for analyzing neural networks for NLP - Categorize them into several trends (more on this below) - Discuss limitations in current work - And point out directions for future work #1 What linguistic information is captured in neural networks: - Methods for finding language in #deeplearning - Categorization of linguistic phenomena and network components - Limitations of the common ""auxiliary prediction tasks"" aka ""diagnostic classifiers"" aka ""probing tasks"" #2 Visualization as an analysis tool, including heat maps of RNN activations, attention alignments, saliency measures from #computervision, and online visualization tools. We also call for more evaluations of such methods. #3 Challenge sets, aka test, suites as a paradigm for fine-grained evaluation, with inspiration from older #NLProc work. - Different tasks (#neuralempty, NLI -- we need more!) - Linguistic properties - Variety of languages (not enough!) and scale - Construction methods #4 Adversarial examples as a way to detect weaknesses of neural networks. We discuss the difficulty of discrete text input, and: - Black-box vs white-box attacks - Targeted vs non-targeted - Linguistic unit (characters, words, sentences) - The attacked task - Issues of coherence #5 Explaining predictions by generating explanations or finding input-output associations, and call for more work on this important topic. #6 And some other analysis work like erasure, behavioral experiments, and learning formal languages. 
For all the refs we've missed, please contribute to the website: <LINK> Finally, if you need inspiration, see our conclusion for some of the gaps we identified in the literature. And then submit to the #BlackboxNLP #acl2019 workshop at <LINK>",https://arxiv.org/abs/1812.08951,"The field of natural language processing has seen impressive progress in recent years, with neural network models replacing many of the traditional systems. A plethora of new models have been proposed, many of which are thought to be opaque compared to their feature-rich counterparts. This has led researchers to analyze, interpret, and evaluate neural networks in novel and more fine-grained ways. In this survey paper, we review analysis methods in neural language processing, categorize them according to prominent research trends, highlight existing limitations, and point to potential directions for future work. ",Analysis Methods in Neural Language Processing: A Survey,8,"['Interested in understanding neural networks for #NLProc ? Looking for reading material for your winter break? Check out our new paper, ""Analysis Methods in Neural Language Processing: A Survey"", to appear in TACL.\nPreprint: <LINK>\nWebsite: <LINK>', 'In the paper, we:\n- Review various methods for analyzing neural networks for NLP\n- Categorize them into several trends (more on this below)\n- Discuss limitations in current work\n- And point out directions for future work', '#1 What linguistic information is captured in neural networks:\n- Methods for finding language in #deeplearning\n- Categorization of linguistic phenomena and network components\n- Limitations of the common ""auxiliary prediction tasks"" aka ""diagnostic classifiers"" aka ""probing tasks""', '#2 Visualization as an analysis tool, including heat maps of RNN activations, attention alignments, saliency measures from #computervision, and online visualization tools. 
We also call for more evaluations of such methods.', '#3 Challenge sets, aka test, suites as a paradigm for fine-grained evaluation, with inspiration from older #NLProc work. \n- Different tasks (#neuralempty, NLI -- we need more!)\n- Linguistic properties\n- Variety of languages (not enough!) and scale\n- Construction methods', '#4 Adversarial examples as a way to detect weaknesses of neural networks. We discuss the difficulty of discrete text input, and:\n- Black-box vs white-box attacks\n- Targeted vs non-targeted\n- Linguistic unit (characters, words, sentences) \n- The attacked task\n- Issues of coherence', '#5 Explaining predictions by generating explanations or finding input-output associations, and call for more work on this important topic.\n\n#6 And some other analysis work like erasure, behavioral experiments, and learning formal languages.', ""For all the refs we've missed, please contribute to the website: https://t.co/4copevvux6\n\nFinally, if you need inspiration, see our conclusion for some of the gaps we identified in the literature.\n\nAnd then submit to the #BlackboxNLP #acl2019 workshop at https://t.co/aXMdnc5cqZ""]",18,12,2004 |
30,24,1520435076194082816,947751551258583040,Yoav Levine,"1/2 New paper: ""Sub-Task Decomposition Enables Learning in Seq2seq Tasks"". We prove that unlearnable composite problems can be learned if the network is trained to first predict intermediate steps towards the solution, and only then the solution itself <LINK> 2/2 Our paper theoretically motivates trending approaches such as chain of thought prompting, scratchpads, self thought reasoners (and more), which tackle compounded problems in natural language via introducing intermediate supervision in a seq2seq manner.@noamwies @AmnonShashua",https://arxiv.org/abs/2204.02892,"The field of Natural Language Processing (NLP) has experienced a dramatic leap in capabilities with the recent introduction of huge Language Models (LMs). Despite this success, natural language problems that involve several compounded steps are still practically unlearnable, even by the largest LMs. This complies with experimental failures for end-to-end learning of composite problems that were demonstrated in a variety of domains. A known mitigation is to introduce intermediate supervision for solving sub-tasks of the compounded problem. Recently, several works have demonstrated high gains by taking a straightforward approach for incorporating intermediate supervision in compounded natural language problems: the sequence-to-sequence LM is fed with an augmented input, in which the decomposed tasks' labels are simply concatenated to the original input. In this paper, we prove a positive learning result that motivates these recent efforts. We show that when concatenating intermediate supervision to the input and training a sequence-to-sequence model on this modified input, an unlearnable composite problem becomes learnable. We prove this for the notoriously unlearnable composite task of bit-subset parity, with the intermediate supervision being parity results of increasingly large bit-subsets. 
Beyond motivating contemporary empirical efforts for incorporating intermediate supervision in sequence-to-sequence language models, our positive theoretical result is the first of its kind in the landscape of results on the benefits of intermediate supervision: Until now, all theoretical results on the subject are negative, i.e., show cases where learning is impossible without intermediate supervision, while our result is positive, showing a case where learning is facilitated in the presence of intermediate supervision. ",Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks,2,"['1/2 \nNew paper: ""Sub-Task Decomposition Enables Learning in Seq2seq Tasks"". We prove that unlearnable composite problems can be learned if the network is trained to first predict intermediate steps towards the solution, and only then the solution itself <LINK>', '2/2\nOur paper theoretically motivates trending approaches such as chain of thought prompting, scratchpads, self thought reasoners (and more), which tackle compounded problems in natural language via introducing intermediate supervision in a seq2seq manner.@noamwies @AmnonShashua']",22,04,539 |
31,40,1064862803679432704,5850692,Aaron Roth,"Excited about a new paper with @datasciwell and @zstevenwu : ""How to Use Heuristics For Differential Privacy"", here: <LINK> We give oracle efficient algorithms for private learning and synthetic data generation, and leave a bunch of interesting open problems.",https://arxiv.org/abs/1811.07765,"We develop theory for using heuristics to solve computationally hard problems in differential privacy. Heuristic approaches have enjoyed tremendous success in machine learning, for which performance can be empirically evaluated. However, privacy guarantees cannot be evaluated empirically, and must be proven --- without making heuristic assumptions. We show that learning problems over broad classes of functions can be solved privately and efficiently, assuming the existence of a non-private oracle for solving the same problem. Our first algorithm yields a privacy guarantee that is contingent on the correctness of the oracle. We then give a reduction which applies to a class of heuristics which we call certifiable, which allows us to convert oracle-dependent privacy guarantees to worst-case privacy guarantee that hold even when the heuristic standing in for the oracle might fail in adversarial ways. Finally, we consider a broad class of functions that includes most classes of simple boolean functions studied in the PAC learning literature, including conjunctions, disjunctions, parities, and discrete halfspaces. We show that there is an efficient algorithm for privately constructing synthetic data for any such class, given a non-private learning oracle. This in particular gives the first oracle-efficient algorithm for privately generating synthetic data for contingency tables. The most intriguing question left open by our work is whether or not every problem that can be solved differentially privately can be privately solved with an oracle-efficient algorithm. 
While we do not resolve this, we give a barrier result that suggests that any generic oracle-efficient reduction must fall outside of a natural class of algorithms (which includes the algorithms given in this paper). ",How to Use Heuristics for Differential Privacy,1,"['Excited about a new paper with @datasciwell and @zstevenwu : ""How to Use Heuristics For Differential Privacy"", here: <LINK> We give oracle efficient algorithms for private learning and synthetic data generation, and leave a bunch of interesting open problems.']",18,11,259 |
32,119,1456167325732904968,1107605372905377792,Aku Venhola🇺🇦,"New paper in ArXiv today about low surface brightness dwarfs in the Fornax cluster: <LINK> In this paper, we applied MTO to find new galaxies from the FDS data. This way we were able to extend our previous FDS dwarf catalog to 821 dwarfs. Below some key findings: - Massive dwarfs (log(M*)>7) show correlation between axis-ratio and surface brightness, which can be corrected by inclination correction -> this is not the case for log(M*)<7 dwarfs. Interpretation: disks vs. not disks? -Surface brightness of dwarfs become fainter toward the cluster center. This is partly due to the ageing of their stellar populations but we do find evidence for tidal interactions as well: more disturbed morphology, lower axis-ratios and lower surface density toward the center. -We found the low-luminosity end slope of the galaxy luminosity function to be -1.4, which is consistent with findings in other environments studied with similar data. However, the match is not that great with simulations (resolution issues?). Conclusion: the Fornax galaxy cluster makes dwarfs fainter by stopping their star formation and by lowering their density via tidal forces.",https://arxiv.org/abs/2111.01855,"In this work we use Max-Tree Objects, (MTO) on the FDS data in order to detect previously undetected Low surface brightness (LSB) galaxies. After extending the existing Fornax dwarf galaxy catalogs with this sample, our goal is to understand the evolution of LSB dwarfs in the cluster. We also study the contribution of the newly detected galaxies to the faint end of the luminosity function. We test the detection completeness and parameter extraction accuracy of MTO. We then apply MTO to the FDS images to identify LSB candidates. The identified objects are fitted with 2D S\'ersic models using GALFIT and classified based on their morphological appearance, colors, and structure. 
With MTO, we are able to increase the completeness of our earlier FDS dwarf catalog (FDSDC) 0.5-1 mag deeper in terms of total magnitude and surface brightness. Due to the increased accuracy in measuring sizes of the detected objects, we also add many small galaxies to the catalog that were previously excluded as their outer parts had been missed in detection. We detect 265 new LSB dwarf galaxies in the Fornax cluster, which increases the total number of known dwarfs in Fornax to 821. Using the extended catalog, we show that the luminosity function has a faint-end slope of -1.38+/-0.02. We compare the obtained luminosity function with different environments studied earlier using deep data but do not find any significant differences. On the other hand, the Fornax-like simulated clusters in the IllustrisTNG cosmological simulation have shallower slopes than found in the observational data. We also find several trends in the galaxy colors, structure, and morphology that support the idea that the number of LSB galaxies is higher in the cluster center due to tidal forces and the age dimming of the stellar populations. The same result also holds for the subgroup of large LSB galaxies, so-called ultra-diffuse galaxies. ","The Fornax Deep Survey (FDS) with VST XII: Low surface brightness dwarf |
galaxies in the Fornax cluster",5,"['New paper in ArXiv today about low surface brightness dwarfs in the Fornax cluster:\n<LINK>\nIn this paper, we applied MTO to find new galaxies from the FDS data. This way we were able to extend our previous FDS dwarf catalog to 821 dwarfs. Below some key findings:', '- Massive dwarfs (log(M*)>7) show correlation between axis-ratio and surface brightness, which can be corrected by inclination correction -> this is not the case for log(M*)<7 dwarfs. Interpretation: disks vs. not disks?', '-Surface brightness of dwarfs become fainter toward the cluster center. This is partly due to the ageing of their stellar populations but we do find evidence for tidal interactions as well: more disturbed morphology, lower axis-ratios and lower surface density toward the center.', '-We found the low-luminosity end slope of the galaxy luminosity function to be -1.4, which is consistent with findings in other environments studied with similar data. However, the match is not that great with simulations (resolution issues?).', 'Conclusion: the Fornax galaxy cluster makes dwarfs fainter by stopping their star formation and by lowering their density via tidal forces.']",21,11,1157 |
33,167,1279017657178763270,893109085,Edouard Grave,"New work w/ @gizacard (Gautier Izacard): how much do generative models for open domain QA benefit from retrieval? A lot! Retrieving 100 passages, we get 51.4 EM on NaturalQuestions, 67.6 EM on TriviaQA. 1/3 Paper: <LINK> <LINK> Our main finding: generative models are great at combining information from multiple passages, as their performance keeps improving as the number of support documents increases. 2/3 <LINK> By processing passages independently in the encoder, but jointly in the decoder, our models scale to large numbers of passages, and can combine information from these multiple passages. 3/3",https://arxiv.org/abs/2007.01282,"Generative models for open domain question answering have proven to be competitive, without resorting to external knowledge. While promising, this approach requires to use models with billions of parameters, which are expensive to train and query. In this paper, we investigate how much these models can benefit from retrieving text passages, potentially containing evidence. We obtain state-of-the-art results on the Natural Questions and TriviaQA open benchmarks. Interestingly, we observe that the performance of this method significantly improves when increasing the number of retrieved passages. This is evidence that generative models are good at aggregating and combining evidence from multiple passages. ","Leveraging Passage Retrieval with Generative Models for Open Domain |
Question Answering",3,"['New work w/ @gizacard (Gautier Izacard): how much do generative models for open domain QA benefit from retrieval? A lot! Retrieving 100 passages, we get 51.4 EM on NaturalQuestions, 67.6 EM on TriviaQA. 1/3\nPaper: <LINK> <LINK>', 'Our main finding: generative models are great at combining information from multiple passages, as their performance keeps improving as the number of support documents increases. 2/3 https://t.co/f1iQ6glPez', 'By processing passages independently in the encoder, but jointly in the decoder, our models scale to large numbers of passages, and can combine information from these multiple passages. 3/3']",20,07,606 |
34,49,1187007570432679936,876274407995527169,David Madras,"New paper on ArXiv! “Detecting Extrapolation with Local Ensembles” (w/ @alexdamour and @james_c_atwood)! <LINK> TL;DR: We present a post-hoc method for approximating the variance of an ensemble, which is useful for estimating prediction reliability. Ensemble training (many models, same architecture & training set) is a good way to estimate predictive reliability. If many models in ensemble disagree, prediction is probably unreliable. However, sometimes you can’t train an ensemble (pre-trained models, resource constraints). <LINK> We present Local Ensembles, a method for estimating prediction reliability in a *pre-trained* model, which approximates the variance of an ensemble (from that model class and training set), using only local second-order information. Intuition: some directions around model parameters are flat (small Hessian eigenvalues), so perturbing in those directions won't raise training loss. We call this the ensemble subspace. <LINK> The models in the ensemble subspace are all similarly good with respect to the training loss; if their predictions at an input disagree, the model class is underdetermined by the training data (and loss function) at that input, and the prediction is probably unreliable. For a test input, we can approximate the predictive variance of the models lying locally in the ensemble subspace by projecting the prediction gradient into it. We show empirically the norm of this projection (our extrapolation score) approximates true ensemble variance well. <LINK> See the paper for experimental results. For instance, active learning: we show that our method provides useful signal for selecting the next points to train on (higher scores mean more extrapolation means receiving a label would more useful) <LINK> Lots more technical details in the paper: how do we find the ensemble subspace (the Lanczos iteration and some tricks), relationships to other second-order methods (e.g. 
influence functions, Laplace approximation) and more!",https://arxiv.org/abs/1910.09573,"We present local ensembles, a method for detecting underspecification -- when many possible predictors are consistent with the training data and model class -- at test time in a pre-trained model. Our method uses local second-order information to approximate the variance of predictions across an ensemble of models from the same class. We compute this approximation by estimating the norm of the component of a test point's gradient that aligns with the low-curvature directions of the Hessian, and provide a tractable method for estimating this quantity. Experimentally, we show that our method is capable of detecting when a pre-trained model is underspecified on test data, with applications to out-of-distribution detection, detecting spurious correlates, and active learning. ",Detecting Underspecification with Local Ensembles,8,"['New paper on ArXiv! “Detecting Extrapolation with Local Ensembles” (w/ @alexdamour and @james_c_atwood)! <LINK>\n\nTL;DR: We present a post-hoc method for approximating the variance of an ensemble, which is useful for estimating prediction reliability.', 'Ensemble training (many models, same architecture & training set) is a good way to estimate predictive reliability. If many models in ensemble disagree, prediction is probably unreliable. However, sometimes you can’t train an ensemble (pre-trained models, resource constraints). https://t.co/iGtPXXV2Jd', 'We present Local Ensembles, a method for estimating prediction reliability in a *pre-trained* model, which approximates the variance of an ensemble (from that model class and training set), using only local second-order information.', ""Intuition: some directions around model parameters are flat (small Hessian eigenvalues), so perturbing in those directions won't raise training loss. We call this the ensemble subspace. 
https://t.co/KvnGctFrqf"", 'The models in the ensemble subspace are all similarly good with respect to the training loss; if their predictions at an input disagree, the model class is underdetermined by the training data (and loss function) at that input, and the prediction is probably unreliable.', 'For a test input, we can approximate the predictive variance of the models lying locally in the ensemble subspace by projecting the prediction gradient into it. We show empirically the norm of this projection (our extrapolation score) approximates true ensemble variance well. https://t.co/xlAZsB8FXz', 'See the paper for experimental results. For instance, active learning: we show that our method provides useful signal for selecting the next points to train on (higher scores mean more extrapolation means receiving a label would more useful) https://t.co/rzXCkHttep', 'Lots more technical details in the paper: how do we find the ensemble subspace (the Lanczos iteration and some tricks), relationships to other second-order methods (e.g. influence functions, Laplace approximation) and more!']",19,10,1989 |
35,4,1366647976974639106,3094610676,Pranav Rajpurkar,"Common practice for training medical imaging AI models is to use labels extracted from radiology reports. This assumes that report labels are good proxies for image labels. BUT are they? No! 😲 New paper🔎 <LINK> @saahil9jain @AkshaySmit @mattlungrenMD 1/9 <LINK> Overall, we investigate this discrepancy between radiology report labels and image labels. We develop a radiology report labeler, VisualCheXbert, that better agrees with radiologists labeling images than do radiologists labeling reports. 4 primary questions & findings > 2/9 Q1: Do radiologists labeling reports agree with radiologists labeling X-ray images? A: We find significant disagreement between radiologists labeling reports and radiologists labeling images. 3/9 <LINK> Q2: Why is there so much disagreement? A: We report multiple reasons, one of which is that radiologists labeling reports have access to clinical report history, which biases their diagnoses compared to radiologists labeling images who do not have access to this information. 4/9 <LINK> Q3: Are there significant relationships between report labels and image labels for different conditions? A: Yes! We report how the presence of a condition like Atelectasis in a report is related to the odds of a condition like Support Devices in an X-ray image. 5/9 <LINK> Q4: Importantly, can we learn to map reports directly to X-ray image labels? A: Yes! VisualCheXbert, uses a biomedically pretrained BERT model that is supervised by a computer vision model trained to detect diseases from chest X-rays. Info on VisualCheXbert’s performance > 6/9 <LINK> When evaluated with radiologist image labels on the CheXpert test set, VisualCheXbert obtains a statistically significant overall improvement over a commonly used radiology report labeler (the CheXpert labeler) as well as radiologists labeling reports. 7/9 <LINK> Our approach of supervising a report labeler with a vision model can be applied across domains. 
While previous labelers replicate radiologists labeling reports, VisualCheXbert improves over radiologists labeling reports on the more relevant task of producing image labels! 8/9 Had a great time working with stars and first authors @saahil9jain & @AkshaySmit. Great team of @steventruongq, Chanh DT Nguyen, Minh-Thanh Huynh, Mudit Jain, @VinBrainAI, and Victoria Young, @AndrewYNg @mattlungrenMD. Read more about our work here: <LINK> 9/9 @StanfordAILab @stanfordnlp",https://arxiv.org/abs/2102.11467,"Automatic extraction of medical conditions from free-text radiology reports is critical for supervising computer vision models to interpret medical images. In this work, we show that radiologists labeling reports significantly disagree with radiologists labeling corresponding chest X-ray images, which reduces the quality of report labels as proxies for image labels. We develop and evaluate methods to produce labels from radiology reports that have better agreement with radiologists labeling images. Our best performing method, called VisualCheXbert, uses a biomedically-pretrained BERT model to directly map from a radiology report to the image labels, with a supervisory signal determined by a computer vision model trained to detect medical conditions from chest X-ray images. We find that VisualCheXbert outperforms an approach using an existing radiology report labeler by an average F1 score of 0.14 (95% CI 0.12, 0.17). We also find that VisualCheXbert better agrees with radiologists labeling chest X-ray images than do radiologists labeling the corresponding radiology reports by an average F1 score across several medical conditions of between 0.12 (95% CI 0.09, 0.15) and 0.21 (95% CI 0.18, 0.24). ","VisualCheXbert: Addressing the Discrepancy Between Radiology Report |
Labels and Image Labels",10,"['Common practice for training medical imaging AI models is to use labels extracted from radiology reports.\n\nThis assumes that report labels are good proxies for image labels. BUT are they?\n\nNo! 😲\n\nNew paper🔎 <LINK>\n\n@saahil9jain @AkshaySmit @mattlungrenMD \n\n1/9 <LINK>', 'Overall, we investigate this discrepancy between radiology report labels and image labels.\n\nWe develop a radiology report labeler, VisualCheXbert, that better agrees with radiologists labeling images than do radiologists labeling reports. \n\n4 primary questions & findings >\n\n2/9', 'Q1: Do radiologists labeling reports agree with radiologists labeling X-ray images?\n\nA: We find significant disagreement between radiologists labeling reports and radiologists labeling images.\n\n3/9 https://t.co/3Ufzpr549K', 'Q2: Why is there so much disagreement?\n\nA: We report multiple reasons, one of which is that radiologists labeling reports have access to clinical report history, which biases their diagnoses compared to radiologists labeling images who do not have access to this information.\n\n4/9 https://t.co/OeXcObOeav', 'Q3: Are there significant relationships between report labels and image labels for different conditions?\n\nA: Yes! We report how the presence of a condition like Atelectasis in a report is related to the odds of a condition like Support Devices in an X-ray image.\n\n5/9 https://t.co/jJXoAtGi5A', 'Q4: Importantly, can we learn to map reports directly to X-ray image labels?\n\nA: Yes! 
VisualCheXbert, uses a biomedically pretrained BERT model that is supervised by a computer vision model trained to detect diseases from chest X-rays.\n\nInfo on VisualCheXbert’s performance >\n\n6/9 https://t.co/Lq6DS9j2RU', 'When evaluated with radiologist image labels on the CheXpert test set, VisualCheXbert obtains a statistically significant overall improvement over a commonly used radiology report labeler (the CheXpert labeler) as well as radiologists labeling reports.\n\n7/9 https://t.co/8z1Zr3o6Yc', 'Our approach of supervising a report labeler with a vision model can be applied across domains.\n \nWhile previous labelers replicate radiologists labeling reports, VisualCheXbert improves over radiologists labeling reports on the more relevant task of producing image labels!\n \n8/9', 'Had a great time working with stars and first authors @saahil9jain & @AkshaySmit.\n\nGreat team of @steventruongq, Chanh DT Nguyen, Minh-Thanh Huynh, Mudit Jain, @VinBrainAI, and Victoria Young, @AndrewYNg @mattlungrenMD.\n\nRead more about our work here: https://t.co/Ug17cCCt8C\n\n9/9', '@StanfordAILab @stanfordnlp']",21,02,2423 |
36,10,1510918086509137921,1144657431382958082,Jannis Kurtz,"If you are interested in #robustoptimization with discrete uncertainty sets maybe you want to check our new short paper: <LINK> With Marc Goerigk we study the problem of selecting start scenarios for iterative scenario generation methods for RO problems 1/n Our heuristic learns the relevance of a scenario by extracting information from training data. The main observation is that even choosing a single start scenario can lead to a significant benefit for the subsequent iterative process which was quite surprising for us. Hence using available training data can be useful. Unfortunately there is still no data basis for robust optimization instances. In our case each scenario has to be labeled to be relevant or not, which implies that the instance need to be solved to optimality to be labeled. We circumvent this problem by constructing instances where relevant scenarios are known by construction. However testing the idea for more realistic instances is desirable. Marc Goerigk recently started to collect instances <LINK> Feel free to submit instances!",https://arxiv.org/abs/2203.16642,"In this work we study robust one- and two-stage problems with discrete uncertainty sets which are known to be hard to solve even if the underlying deterministic problem is easy. Popular solution methods iteratively generate scenario constraints and possibly second-stage variables. This way, by solving a sequence of smaller problems, it is often possible to avoid the complexity of considering all scenarios simultaneously. A key ingredient for the performance of the iterative methods is a good selection of start scenarios. In this paper we propose a data-driven heuristic to seed the iterative solution method with a set of starting scenarios that provide a strong lower bound early in the process, and result in considerably smaller overall solution times compared to other benchmark methods. 
Our heuristic learns the relevance of a scenario by extracting information from training data based on a combined similarity measure between robust problem instances and single scenarios. Our experiments show that predicting even a small number of good start scenarios by our method can considerably reduce the computation time of the iterative methods. ",Data-driven Prediction of Relevant Scenarios for Robust Optimization,4,"['If you are interested in #robustoptimization with discrete uncertainty sets maybe you want to check our new short paper: <LINK>\n\nWith Marc Goerigk we study the problem of selecting start scenarios for iterative scenario generation methods for RO problems 1/n', 'Our heuristic learns the relevance of a scenario by extracting information from training data. The main observation is that even choosing a single start scenario can lead to a significant benefit for the subsequent iterative process which was quite surprising for us.', 'Hence using available training data can be useful. Unfortunately there is still no data basis for robust optimization instances. In our case each scenario has to be labeled to be relevant or not, which implies that the instance need to be solved to optimality to be labeled.', 'We circumvent this problem by constructing instances where relevant scenarios are known by construction. However testing the idea for more realistic instances is desirable. Marc Goerigk recently started to collect instances https://t.co/3QzYWa5abW Feel free to submit instances!']",22,03,1063 |
37,292,1321004483988623360,1085279811692621825,Luca Matrà,"And in another great work led by @Sebastromarino today, we find yet another radially wide planetesimal belt with a ⭐️GAP⭐️ around HD206893, hinting at a third companion orbiting the star at 74 au outer to the brown dwarfs HD206893B and C! What a system! <LINK> <LINK>",https://arxiv.org/abs/2010.12582,"Radial substructure in the form of rings and gaps has been shown to be ubiquitous among protoplanetary discs. This could be the case in exoKuiper belts as well, and evidence for this is emerging. In this paper we present ALMA observations of the debris/planetesimal disc surrounding HD 206893, a system that also hosts two massive companions at 2 and 11 au. Our observations reveal a disc extending from 30 to 180 au, split by a 27 au wide gap centred at 74 au, and no dust surrounding the reddened brown dwarf (BD) at 11 au. The gap width suggests the presence of a 0.9 M$_\mathrm{Jup}$ planet at 74 au, which would be the third companion in this system. Using previous astrometry of the BD, combined with our derived disc orientation as a prior, we were able to better constrain its orbit finding it is likely eccentric ($0.14^{+0.05}_{-0.04}$). For the innermost companion, we used RV, proper motion anomaly and stability considerations to show its mass and semi-major axis are likely in the range 4-100 M$_\mathrm{Jup}$ and 1.4-4.5 au. These three companions will interact on secular timescales and perturb the orbits of planetesimals, stirring the disc and potentially truncating it to its current extent via secular resonances. Finally, the presence of a gap in this system adds to the growing evidence that gaps could be common in wide exoKuiper belts. Out of 6 wide debris discs observed with ALMA with enough resolution, 4-5 show radial substructure in the form of gaps. 
",Insights into the planetary dynamics of HD 206893 with ALMA,1,"['And in another great work led by @Sebastromarino today, we find yet another radially wide planetesimal belt with a ⭐️GAP⭐️ around HD206893, hinting at a third companion orbiting the star at 74 au outer to the brown dwarfs HD206893B and C! What a system!\n\n<LINK> <LINK>']",20,10,267 |
38,220,1446391470236254209,1244601028466655232,Hurault Samuel,"<LINK> We propose a new convergent Plug&Play (PnP) scheme with novel theoretical guarantees and state-of-the-art image restoration performance. We train a deep denoiser as an exact gradient descent step on a functional parameterized by a deep neural network. 1/3 Once plugged in a PnP framework, the resulting fixed-point algorithm is guaranteed to converge to a stationary point of an explicit functional. This PnP algorithm reaches state-of-the art on various ill-posed IR tasks like deblurring, super-resolution or inpainting. (2/3) Code available here : <LINK>",https://arxiv.org/abs/2110.03220,"Plug-and-Play methods constitute a class of iterative algorithms for imaging problems where regularization is performed by an off-the-shelf denoiser. Although Plug-and-Play methods can lead to tremendous visual performance for various image problems, the few existing convergence guarantees are based on unrealistic (or suboptimal) hypotheses on the denoiser, or limited to strongly convex data terms. In this work, we propose a new type of Plug-and-Play methods, based on half-quadratic splitting, for which the denoiser is realized as a gradient descent step on a functional parameterized by a deep neural network. Exploiting convergence results for proximal gradient descent algorithms in the non-convex setting, we show that the proposed Plug-and-Play algorithm is a convergent iterative scheme that targets stationary points of an explicit global functional. Besides, experiments show that it is possible to learn such a deep denoiser while not compromising the performance in comparison to other state-of-the-art deep denoisers used in Plug-and-Play schemes. We apply our proximal gradient algorithm to various ill-posed inverse problems, e.g. deblurring, super-resolution and inpainting. For all these applications, numerical results empirically confirm the convergence results. 
Experiments also show that this new algorithm reaches state-of-the-art performance, both quantitatively and qualitatively. ",Gradient Step Denoiser for convergent Plug-and-Play,3,"['<LINK>\n\nWe propose a new convergent Plug&Play (PnP) scheme with novel theoretical guarantees and state-of-the-art image restoration performance. We train a deep denoiser as an exact gradient descent step on a functional parameterized by a deep neural network. 1/3', 'Once plugged in a PnP framework, the resulting fixed-point algorithm is guaranteed to converge to a stationary point of an explicit functional. This PnP algorithm reaches state-of-the art on various ill-posed IR tasks like deblurring, super-resolution or inpainting. (2/3)', 'Code available here : https://t.co/dklTNYoJRr']",21,10,564 |
39,19,1067918096046927872,7352832,Theo Weber,"New paper on using counterfactual inference for RL! <LINK>. In contrast with ‘classical’ model-based RL questions of type 'what would happen if', counterfactual reasoning's 'what would have happened if' simulates low-bias alternative outcomes of real experience.",https://arxiv.org/abs/1811.06272,"Learning policies on data synthesized by models can in principle quench the thirst of reinforcement learning algorithms for large amounts of real experience, which is often costly to acquire. However, simulating plausible experience de novo is a hard problem for many complex environments, often resulting in biases for model-based policy evaluation and search. Instead of de novo synthesis of data, here we assume logged, real experience and model alternative outcomes of this experience under counterfactual actions, actions that were not actually taken. Based on this, we propose the Counterfactually-Guided Policy Search (CF-GPS) algorithm for learning policies in POMDPs from off-policy experience. It leverages structural causal models for counterfactual evaluation of arbitrary policies on individual off-policy episodes. CF-GPS can improve on vanilla model-based RL algorithms by making use of available logged data to de-bias model predictions. In contrast to off-policy algorithms based on Importance Sampling which re-weight data, CF-GPS leverages a model to explicitly consider alternative outcomes, allowing the algorithm to make better use of experience data. We find empirically that these advantages translate into improved policy evaluation and search results on a non-trivial grid-world task. Finally, we show that CF-GPS generalizes the previously proposed Guided Policy Search and that reparameterization-based algorithms such Stochastic Value Gradient can be interpreted as counterfactual methods. ","Woulda, Coulda, Shoulda: Counterfactually-Guided Policy Search",1,"[""New paper on using counterfactual inference for RL! <LINK>. 
In contrast with ‘classical’ model-based RL questions of type 'what would happen if', counterfactual reasoning's 'what would have happened if' simulates low-bias alternative outcomes of real experience.""]",18,11,262 |
40,98,1273519079593312257,481539448,Richard Alexander,"New paper, led by @PhysicsUoL PhD student (and soon-to-be @QMPlanets postdoc) @GBallabio. Giulia created a new method to understand how emission lines can measure the properties of disc winds, so we can figure out how planet-forming discs lose mass. <LINK> This is an old subject, but the observations are complex, and it's hard to interpret them without a theoretical framework. Giulia's new models provide a clear new insight into what can - and can't - be determined from these observations. One key result is that the observed [NeII] 12.81um lines, which trace photoionized gas, are consistent with an isothermal wind with a sound speed of ~10km/s. <LINK> However, the [OI] 6300Å lines, which trace neutral gas, are trickier - we can't match both the blue-shifts and widths with a single model. So either the wind is highly non-isothermal, or the different lines are actually probing different parts of a multi-component disc wind.",https://arxiv.org/abs/2006.09811,"Photoevaporation driven by high energy radiation from the central star plays an important role in the evolution of protoplanetary discs. Photoevaporative winds have been unambiguously detected through blue-shifted emission lines, but their detailed properties remain uncertain. Here we present a new empirical approach to make observational predictions of these thermal winds, seeking to fill the gap between theory and observations. We use a self-similar model of an isothermal wind to compute line profiles of several characteristic emission lines (the [Ne${\rm{\scriptsize II}}$] line at 12.81 $\mu$m, and optical forbidden lines such as [O${\rm{\scriptsize I}}$] 6300 $\mathring{A}$ and [S${\rm{\scriptsize II}}$] 4068/4076 $\mathring{A}$), studying how the lines are affected by parameters such as the gas temperature, disc inclinations, and density profile. 
Our model successfully reproduces blue-shifted lines with $v_{\rm peak} \lesssim 10$ km/s, which decrease with increasing disc inclination. The line widths increase with increasing disc inclinations and range from $\Delta v \sim 15-30$ km/s. The predicted blue-shifts are mostly sensitive to the gas sound speed. The observed [Ne${\rm{\scriptsize II}}$] line profiles are consistent with a thermal wind and point towards a relatively high sound speed, as expected for EUV photoevaporation. However, the observed [O${\rm{\scriptsize I}}$] line profiles require lower temperatures, as expected in X-ray photoevaporation, and show a wider scatter that is difficult to reconcile with a single wind model; it seems likely that these lines trace different components of a multi-phase wind. We also note that the spectral resolution of current observations remains an important limiting factor in these studies, and that higher resolution spectra are required if emission lines are to further our understanding of protoplanetary disc winds. ",Forbidden line diagnostics of photoevaporative disc winds,4,"['New paper, led by @PhysicsUoL PhD student (and soon-to-be @QMPlanets postdoc) @GBallabio. Giulia created a new method to understand how emission lines can measure the properties of disc winds, so we can figure out how planet-forming discs lose mass.\n<LINK>', ""This is an old subject, but the observations are complex, and it's hard to interpret them without a theoretical framework. Giulia's new models provide a clear new insight into what can - and can't - be determined from these observations."", 'One key result is that the observed [NeII] 12.81um lines, which trace photoionized gas, are consistent with an isothermal wind with a sound speed of ~10km/s. https://t.co/E1X3zO1FJ9', ""However, the [OI] 6300Å lines, which trace neutral gas, are trickier - we can't match both the blue-shifts and widths with a single model. 
So either the wind is highly non-isothermal, or the different lines are actually probing different parts of a multi-component disc wind.""]",20,06,935 |
41,34,1519788983852711937,886443925,Philipp Siedler,"New paper on Multi-Agent #ReinforcementLearning (MARL) solving a wild-fire management resource distribution task utilising collaboration, auto-curricula training and an openended environment. Accepted at @GamificationMAS workshop @iclr_conf: <LINK> 1/2 <LINK> Huge thanks to @serkancabi, @ptigas for guidance and sharpness, @drimgemp for igniting the context idea, @a_tacchetti and the Gamification and Multiagent solutions reviewers for their time and effort, and finally to @jasmin_ for supporting yet another hobby project of mine 🤖 2/2",https://arxiv.org/abs/2204.11350,"Most real-world domains can be formulated as multi-agent (MA) systems. Intentionality sharing agents can solve more complex tasks by collaborating, possibly in less time. True cooperative actions are beneficial for egoistic and collective reasons. However, teaching individual agents to sacrifice egoistic benefits for a better collective performance seems challenging. We build on a recently proposed Multi-Agent Reinforcement Learning (MARL) mechanism with a Graph Neural Network (GNN) communication layer. Rarely chosen communication actions were marginally beneficial. Here we propose a MARL system in which agents can help collaborators perform better while risking low individual performance. We conduct our study in the context of resource distribution for wildfire management. Communicating environmental features and partially observable fire occurrence help the agent collective to pre-emptively distribute resources. Furthermore, we introduce a procedural training environment accommodating auto-curricula and open-endedness towards better generalizability. Our MA communication proposal outperforms a Greedy Heuristic Baseline and a Single-Agent (SA) setup. We further demonstrate how auto-curricula and openendedness improves generalizability of our MA proposal. ","Collaborative Auto-Curricula Multi-Agent Reinforcement Learning with |
Graph Neural Network Communication Layer for Open-ended Wildfire-Management |
Resource Distribution",2,"['New paper on Multi-Agent #ReinforcementLearning (MARL) solving a wild-fire management resource distribution task utilising collaboration, auto-curricula training and an openended environment. Accepted at @GamificationMAS workshop @iclr_conf: <LINK> 1/2 <LINK>', 'Huge thanks to @serkancabi, @ptigas for guidance and sharpness, @drimgemp for igniting the context idea, @a_tacchetti and the Gamification and Multiagent solutions reviewers for their time and effort, and finally to @jasmin_ for supporting yet another hobby project of mine 🤖 2/2']",22,04,539 |
42,88,1448717928262615040,1316159755254075392,Moya Chen,New paper - Teach E2E task-oriented models use of new APIs by generating synthetic dialogues with a user simulator; self-train on the successful ones with maybe a sprinkle of examples. Repeat. Profit. Paper: <LINK> See thread below for results + highlights. <LINK>,https://arxiv.org/abs/2110.06905,"We demonstrate that large language models are able to simulate Task Oriented Dialogues in novel domains, provided only with an API implementation and a list of goals. We show these simulations can formulate online, automatic metrics that correlate well with human evaluations. Furthermore, by checking for whether the User's goals are met, we can use simulation to repeatedly generate training data and improve the quality of simulations themselves. With no human intervention or domain-specific training data, our simulations bootstrap end-to-end models which achieve a 37\% error reduction in previously unseen domains. By including as few as 32 domain-specific conversations, bootstrapped models can match the performance of a fully-supervised model with $10\times$ more data. To our knowledge, this is the first time simulations have been shown to be effective at bootstrapping models without explicitly requiring any domain-specific training data, rule-engineering, or humans-in-the-loop. ","Teaching Models new APIs: Domain-Agnostic Simulators for Task Oriented |
Dialogue",1,['New paper - Teach E2E task-oriented models use of new APIs by generating synthetic dialogues with a user simulator; self-train on the successful ones with maybe a sprinkle of examples. Repeat. Profit. \n\nPaper: <LINK>\nSee thread below for results + highlights. <LINK>'],21,10,265 |
43,156,1333736004759523335,426509606,Yamir Moreno,"New preprint out today ""Induced Percolation on Networked Systems"" (<LINK>). We propose a new percolation framework that accounts for indirect influence, finding a rich variety of 1st-order, 2nd-order, and hybrid phase transitions. w/ @xiangrongwang et al. <LINK>",https://arxiv.org/abs/2011.14034,"Percolation theory has been widely used to study phase transitions in complex networked systems. It has also successfully explained several macroscopic phenomena across different fields. Yet, the existent theoretical framework for percolation places the focus on the direct interactions among the system's components, while recent empirical observations have shown that indirect interactions are common in many systems like ecological and social networks, among others. Here, we propose a new percolation framework that accounts for indirect interactions, which allows to generalize the current theoretical body and understand the role of the underlying indirect influence of the components of a networked system on its macroscopic behavior. We report a rich phenomenology in which first-order, second-order or hybrid phase transitions are possible depending on whether the links of the substrate network are directed, undirected or a mix, respectively. We also present an analytical framework to characterize the proposed induced percolation, paving the way to further understand network dynamics with indirect interactions. ",Induced Percolation on Networked Systems,1,"['New preprint out today ""Induced Percolation on Networked Systems"" (<LINK>). We propose a new percolation framework that accounts for indirect influence, finding a rich variety of 1st-order, 2nd-order, and hybrid phase transitions. w/ @xiangrongwang et al. <LINK>']",20,11,262 |
44,178,1455131794169270275,995661858508976129,Xingang Pan,"Our #NeurIPS2021 paper ShadeGAN is released at <LINK>. While most 3D-aware GANs are trained via the multi-view constraint, we propose a multi-lighting constraint that resolves the shape-color ambiguity and leads to more accurate 3D shapes! <LINK> <LINK>",https://arxiv.org/abs/2110.15678,"The advancement of generative radiance fields has pushed the boundary of 3D-aware image synthesis. Motivated by the observation that a 3D object should look realistic from multiple viewpoints, these methods introduce a multi-view constraint as regularization to learn valid 3D radiance fields from 2D images. Despite the progress, they often fall short of capturing accurate 3D shapes due to the shape-color ambiguity, limiting their applicability in downstream tasks. In this work, we address this ambiguity by proposing a novel shading-guided generative implicit model that is able to learn a starkly improved shape representation. Our key insight is that an accurate 3D shape should also yield a realistic rendering under different lighting conditions. This multi-lighting constraint is realized by modeling illumination explicitly and performing shading with various lighting conditions. Gradients are derived by feeding the synthesized images to a discriminator. To compensate for the additional computational burden of calculating surface normals, we further devise an efficient volume rendering strategy via surface tracking, reducing the training and inference time by 24% and 48%, respectively. Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis while capturing accurate underlying 3D shapes. We demonstrate improved performance of our approach on 3D shape reconstruction against existing methods, and show its applicability on image relighting. Our code will be released at this https URL ","A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware |
Image Synthesis",1,"['Our #NeurIPS2021 paper ShadeGAN is released at <LINK>.\nWhile most 3D-aware GANs are trained via the multi-view constraint, we propose a multi-lighting constraint that resolves the shape-color ambiguity and leads to more accurate 3D shapes!\n<LINK> <LINK>']",21,10,253 |
45,35,1495857044439826433,106464999,Kat Volk,"hey, do you need to know the free inclinations of things in the main classical Kuiper belt? (free inclinations are way more useful than ecliptic inclinations!) Check out our new paper led by @yukun_huang that has everything you need! <LINK> I made a simple animation for a review paper to demonstrate the concept of free inclinations (it's based on a simplified analytical theory, so what Yukun did for our above paper is much more accurate!) <LINK> the main concept is that the gravitational influence of the giant planets defines a plane about which a small body's orbit will wobble. The observed inclination relative to our arbitrary choice of the ecliptic as a reference plane will change a lot over time because of this... but if we calculate that plane set by the influence of the planets (which changes depending on the small body's other orbital parameters!) and remove it, we get an inclination that is much more stable over time and can reflect .... the original amount of orbital excitation that object experienced in the early solar system! So we can more easily tell the difference between things that have always been on very low-inclination orbits (cold classical Kuiper Belt Objects) and those that have been tossed around! @AlphaCatPA @yukun_huang lots and lots of things! and their amount of inclination tells us something about their history! @AlphaCatPA @yukun_huang lol, close! The more tipsy ones probably formed closer to the sun and got scattered out and implanted whereas the less tipsy ones probably formed where we see them today! @AlphaCatPA @ItsMeDeaner yes! I'm going to have to look at those to see if they have them online! (Not that I need more pajama pants after my winter lands end spree.....) 
@yukun_huang Thanks for doing such a great job getting it done and published so quickly!",https://arxiv.org/abs/2202.09045,"There is a complex inclination structure present in the transneptunian object (TNO) orbital distribution in the main classical belt region (between orbital semimajor axes of 39 and 48 au). The long-term gravitational effects of the giant planets make TNO orbits precess, but non-resonant objects maintain a nearly constant 'free' inclination ($I_\text{free}$) with respect to a local forced precession pole. Because of the likely cosmogonic importance of the distribution of this quantity, we tabulate free inclinations for all main-belt TNOs, each individually computed using barycentric orbital elements with respect to each object's local forcing pole. We show that the simplest method, based on the Laplace-Lagrange secular theory, is unable to give correct forcing poles for objects near the $\nu_{18}$ secular resonance, resulting in poorly conserved $I_\text{free}$ values in much of the main belt. We thus instead implemented an averaged Hamiltonian to obtain the expected nodal precession for each TNO, yielding significantly more accurate free inclinations for non-resonant objects. For the vast majority (96\%) of classical belt TNOs, these $I_\text{free}$ values are conserved to $<1^\circ$ over 4 Gyr numerical simulations, demonstrating the advantage of using this well-conserved quantity in studies of the TNO population and its primordial inclination profile; our computed distributions only reinforce the idea of a very co-planar surviving 'cold' primordial population, overlain by a large $I$-width implanted 'hot' population. ",Free Inclinations for Transneptunian Objects in the Main Kuiper Belt,9,"['hey, do you need to know the free inclinations of things in the main classical Kuiper belt? (free inclinations are way more useful than ecliptic inclinations!) Check out our new paper led by @yukun_huang that has everything you need! 
<LINK>', ""I made a simple animation for a review paper to demonstrate the concept of free inclinations (it's based on a simplified analytical theory, so what Yukun did for our above paper is much more accurate!) https://t.co/0SWOEvfPmG"", ""the main concept is that the gravitational influence of the giant planets defines a plane about which a small body's orbit will wobble. The observed inclination relative to our arbitrary choice of the ecliptic as a reference plane will change a lot over time because of this..."", ""but if we calculate that plane set by the influence of the planets (which changes depending on the small body's other orbital parameters!) and remove it, we get an inclination that is much more stable over time and can reflect ...."", 'the original amount of orbital excitation that object experienced in the early solar system! So we can more easily tell the difference between things that have always been on very low-inclination orbits (cold classical Kuiper Belt Objects) and those that have been tossed around!', '@AlphaCatPA @yukun_huang lots and lots of things! and their amount of inclination tells us something about their history!', '@AlphaCatPA @yukun_huang lol, close! The more tipsy ones probably formed closer to the sun and got scattered out and implanted whereas the less tipsy ones probably formed where we see them today!', ""@AlphaCatPA @ItsMeDeaner yes! I'm going to have to look at those to see if they have them online! (Not that I need more pajama pants after my winter lands end spree.....)"", '@yukun_huang Thanks for doing such a great job getting it done and published so quickly!']",22,02,1817 |
46,40,1419902117758881792,10666172,Sabine Hossenfelder,"New paper, comments welcome <LINK> <LINK> @anothernoki Right to bodily integrity. If you don't want to get vaccinated because you don't care whether you die of COVID, that's your business. If you put other people in danger by it, it's not. @anothernoki what rubbish are you talking about ""medical intervention without the patient's consent""? Are you confusing me with someone else, or can you just not read? @anothernoki @mcMuck Complete nonsense, nobody lumps people who cannot be vaccinated for medical reasons together with those who deliberately put other people (e.g., those who cannot be vaccinated...) in danger because they refuse to get vaccinated. Total straw-man argument.",https://arxiv.org/abs/2107.11497,"Cosmology relies on a coarse-grained description of the universe, assumed to be valid on large length scales. However, the nonlinearity of general relativity makes coarse-graining extremely difficult. We here address this problem by extending the Mori-Zwanzig projection operator formalism, a highly successful coarse-graining method from statistical mechanics, towards general relativity. Using the Buchert equations, we derive a new dynamic equation for the Hubble parameter which captures the effects of averaging through a memory function. This gives an empirical prediction for the cosmic jerk. ","Mori-Zwanzig formalism for general relativity: a new approach to the |
averaging problem",4,"['New paper, comments welcome <LINK> <LINK>', ""@anothernoki Right to bodily integrity. If you don't want to get vaccinated because you don't care whether you die of COVID, that's your business. If you put other people in danger by it, it's not."", '@anothernoki what rubbish are you talking about ""medical intervention without the patient\'s consent""? Are you confusing me with someone else, or can you just not read?', ""@anothernoki @mcMuck Complete nonsense, nobody lumps people who cannot be vaccinated for medical reasons together with those who deliberately put other people (e.g., those who cannot be vaccinated...) in danger because they refuse to get vaccinated. Total straw-man argument.""]",21,07,722 |
47,227,1407419464723935238,301450268,Mireia Montes,"Paper day!! Today we focus on the ""enigmatic"" galaxy ""lacking"" dark matter: NGC1052-DF2 (the first one). You can find the accepted paper here: <LINK> @infantesainz @asborlaff <LINK> First, we investigated the globular cluster system of this galaxy and we found that all but two of them are contained in the stellar body of the galaxy. That is different to what we found for DF4. <LINK> This + that the galaxy does not show signs of interaction in VERY DEEP imaging is telling us that the galaxy is not interacting with anything. <LINK> So what is going on here? Well, @Cosmic_Horizons published a paper finding evidence for rotation in the GCs of the galaxy. The claimed total mass of the galaxy was derived assuming that this was not the case. In our deep images we see that the galaxy, in the outer parts, resembles a disk. The inner parts, what was seen till now, are rounder and spheroidal (Sersic profile n = 0.6, pink), but the outer parts are more elliptical and disky (n =~1, black). <LINK> When we put the rotation map on top our images we can see that the position angle of both things (rotation and stars) coincide, strongly suggesting that the galaxy is rotating. <LINK> This is independent of the distance assumed for this galaxy (which is still controversial). For 20 Mpc there is a factor of 2.6 in total mass, while at 13 Mpc is a factor of 1.5. So: more total mass = more dark matter. To sum up: - DF2 is rotating. - DF4 is interacting with a neighbor. Two different explanations for two different galaxies. 🤷♀️ All this is thanks to the newest Dr @infantesainz for his expertise in data reduction. He is giving a talk at the EAS 2021 S12 that you should listen to. Also @asborlaff for his HST images. He is also giving a talk at EAS S12. Please appreciate, again, the color scheme of the paper. It is complementary to that of DF4's paper. 
@DanieliShany I don't understand why you say that I am ignoring the paper when it is cited in the paper and all the calculations/estimations are done using both distances. @DanieliShany We don't refer to Danieli+2020 because we are barely talking about DF4 in this paper. Only to refer to parts of the analysis that are in common with my previous paper. For Shen+2021b is here: <LINK> @DanieliShany We only cite Monelli & Trujillo 2019 when referring to the distance to DF2 and NGC 1042. @astrocra VERY VERY",https://arxiv.org/abs/2106.10283,"Using ultra-deep imaging ($\mu_g = 30.4$ mag/arcsec$^2$; 3$\sigma$, 10""x10""), we probed the surroundings of the first galaxy ""lacking"" dark matter KKS2000[04] (NGC 1052-DF2). Signs of tidal stripping in this galaxy would explain its claimed low content of dark matter. However, we find no evidence of tidal tails. In fact, the galaxy remains undisturbed down to a radial distance of 80 arcsec. This radial distance triples previous spatial explorations of the stellar distribution of this galaxy. In addition, the distribution of its globular clusters (GCs) is not extended in relation to the bulk of the galaxy (the radius containing half of the GCs is 21 arcsec). We also found that the surface brightness radial profiles of this galaxy in the g and r bands decline exponentially from 35 to 80 arcsec. That, together with a constant ellipticity and position angle in the outer parts of the galaxy strongly suggests the presence of a low-inclination disk. This is consistent with the evidence of rotation found for this object. This finding implies that the dynamical mass of this galaxy is a factor of 2 higher than previously reported, bringing the dark matter content of this galaxy in line with galaxies of similar stellar mass. ","A disk and no signatures of tidal distortion in the galaxy ""lacking"" |
dark matter NGC 1052-DF2",14,"['Paper day!! Today we focus on the ""enigmatic"" galaxy ""lacking"" dark matter: NGC1052-DF2 (the first one). You can find the accepted paper here: <LINK>\n@infantesainz @asborlaff <LINK>', 'First, we investigated the globular cluster system of this galaxy and we found that all but two of them are contained in the stellar body of the galaxy. That is different to what we found for DF4. https://t.co/X7PSx27dAM', 'This + that the galaxy does not show signs of interaction in VERY DEEP imaging is telling us that the galaxy is not interacting with anything. https://t.co/PxGYaIU9D2', 'So what is going on here? Well, @Cosmic_Horizons published a paper finding evidence for rotation in the GCs of the galaxy. The claimed total mass of the galaxy was derived assuming that this was not the case.', 'In our deep images we see that the galaxy, in the outer parts, resembles a disk. The inner parts, what was seen till now, are rounder and spheroidal (Sersic profile n = 0.6, pink), but the outer parts are more elliptical and disky (n =~1, black). https://t.co/EOGrF5ci9m', 'When we put the rotation map on top our images we can see that the position angle of both things (rotation and stars) coincide, strongly suggesting that the galaxy is rotating. https://t.co/3LmENJxyXW', 'This is independent of the distance assumed for this galaxy (which is still controversial). For 20 Mpc there is a factor of 2.6 in total mass, while at 13 Mpc is a factor of 1.5. So: more total mass = more dark matter.', 'To sum up: \n- DF2 is rotating.\n- DF4 is interacting with a neighbor.\nTwo different explanations for two different galaxies. 🤷\u200d♀️', 'All this is thanks to the newest Dr @infantesainz for his expertise in data reduction. He is giving a talk at the EAS 2021 S12 that you should listen to. \nAlso @asborlaff for his HST images. He is also giving a talk at EAS S12.', ""Please appreciate, again, the color scheme of the paper. 
It is complementary to that of DF4's paper."", ""@DanieliShany I don't understand why you say that I am ignoring the paper when it is cited in the paper and all the calculations/estimations are done using both distances."", ""@DanieliShany We don't refer to Danieli+2020 because we are barely talking about DF4 in this paper. Only to refer to parts of the analysis that are in common with my previous paper. For Shen+2021b is here: https://t.co/a3DCG6abBk"", '@DanieliShany We only cite Monelli & Trujillo 2019 when referring to the distance to DF2 and NGC 1042.', '@astrocra VERY VERY']",21,06,2365 |
48,193,1380442667964850179,2739464881,Edward Bryant,"What's this? Large TTVs detected for an extremely low density, long period planet?! Check out the paper - <LINK> - to find out how we did this using data from @NextGenTransits and other telescopes, and what the future holds for the fascinating HIP41378 system <LINK>",https://arxiv.org/abs/2104.03159,"HIP 41378 f is a temperate $9.2\pm0.1 R_{\oplus}$ planet with period of 542.08 days and an extremely low density of $0.09\pm0.02$ g cm$^{-3}$. It transits the bright star HIP 41378 (V=8.93), making it an exciting target for atmospheric characterization including transmission spectroscopy. HIP 41378 was monitored photometrically between the dates of 2019 November 19 and November 28. We detected a transit of HIP 41378 f with NGTS, just the third transit ever detected for this planet, which confirms the orbital period. This is also the first ground-based detection of a transit of HIP 41378 f. Additional ground-based photometry was also obtained and used to constrain the time of the transit. The transit was measured to occur 1.50 hours earlier than predicted. We use an analytic transit timing variation (TTV) model to show the observed TTV can be explained by interactions between HIP 41378 e and HIP 41378 f. Using our TTV model, we predict the epochs of future transits of HIP 41378 f, with derived transit centres of T$_{C,4} = 2459355.087^{+0.031}_{-0.022}$ (May 2021) and T$_{C,5} = 2459897.078^{+0.114}_{-0.060}$ (Nov 2022). ","A transit timing variation observed for the long-period extremely low |
density exoplanet HIP 41378f",1,"[""What's this? Large TTVs detected for an extremely low density, long period planet?! \n\nCheck out the paper - <LINK> - to find out how we did this using data from @NextGenTransits and other telescopes, and what the future holds for the fascinating HIP41378 system <LINK>""]",21,04,267 |
49,42,1519352654899691520,1158317142447706112,Stephanie Brandl,"You can find our new paper “How Conservative are Language Models? Adapting to the Introduction of Gender-Neutral Pronouns” which we will present at @naaclmeeting now on arxiv: <LINK>. TL,DR: In this paper, we show that gender-neutral pronouns in Danish, English and Swedish are associated with higher perplexity, more dispersed attention patterns, and worse downstream performance in NLI and coreference resolution. We see a clear increase in perplexity for gender-neutral pronouns in comparison to gendered pronouns such as she/he and a correlation between attention flow and perplexity, particularly in later layers. This suggests that there is some development across layers that is stronger for gender-neutral pronouns than for gendered pronouns <LINK> Furthermore, we observe a drastic drop in performance in pronoun resolution in English <LINK> as well as a smaller decrease for Danish where we applied the full coreference resolution <LINK> We argue that such conservativity in language models may limit widespread adoption of gender-neutral pronouns and must therefore be resolved. This is joint work with @ruixiangcui and Anders Søgaard @coastalcph",https://arxiv.org/abs/2204.10281,"Gender-neutral pronouns have recently been introduced in many languages to a) include non-binary people and b) as a generic singular. Recent results from psycholinguistics suggest that gender-neutral pronouns (in Swedish) are not associated with human processing difficulties. This, we show, is in sharp contrast with automated processing. We show that gender-neutral pronouns in Danish, English, and Swedish are associated with higher perplexity, more dispersed attention patterns, and worse downstream performance. We argue that such conservativity in language models may limit widespread adoption of gender-neutral pronouns and must therefore be resolved. ","How Conservative are Language Models? Adapting to the Introduction of |
Gender-Neutral Pronouns",6,"['You can find our new paper “How Conservative are Language Models? Adapting to the Introduction of Gender-Neutral Pronouns” which we will present at @naaclmeeting now on arxiv: <LINK>. \nTL,DR: In this paper, we show that gender-neutral pronouns in Danish, English', 'and Swedish are associated with higher perplexity, more dispersed attention patterns, and worse downstream performance in NLI and coreference resolution.\nWe see a clear increase in perplexity for gender-neutral pronouns in comparison to gendered pronouns such', 'as she/he and a correlation between attention flow and perplexity, particularly in later layers. This suggests that there is some development across layers\nthat is stronger for gender-neutral pronouns than for gendered pronouns https://t.co/vJ77wJ8pFA', 'Furthermore, we observe a drastic drop in performance in pronoun resolution in English https://t.co/GG9xAofanl', 'as well as a smaller decrease for Danish where we applied the full coreference resolution https://t.co/ZWCKMNc4BI', 'We argue that such conservativity in language models may limit widespread adoption of gender-neutral pronouns and must therefore be resolved.\nThis is joint work with @ruixiangcui and Anders Søgaard @coastalcph']",22,04,1157 |
50,50,1519119528835567617,107357817,daisukelab,"Our new paper is out! We explored a simple masked patch modeling w/o augmentation to learn a latent that describes the input spectrogram as it is. “Masked Spectrogram Modeling using Masked Autoencoders for Learning General-purpose Audio Representation” <LINK> We used MAE with spectrogram input, evaluated the HEAR 2021 benchmark, and visualized reconstructions and self-attention maps to understand learned features. (I like visualizations.♥) <LINK>",https://arxiv.org/abs/2204.12260,"Recent general-purpose audio representations show state-of-the-art performance on various audio tasks. These representations are pre-trained by self-supervised learning methods that create training signals from the input. For example, typical audio contrastive learning uses temporal relationships among input sounds to create training signals, whereas some methods use a difference among input views created by data augmentations. However, these training signals do not provide information derived from the intact input sound, which we think is suboptimal for learning representation that describes the input as it is. In this paper, we seek to learn audio representations from the input itself as supervision using a pretext task of auto-encoding of masked spectrogram patches, Masked Spectrogram Modeling (MSM, a variant of Masked Image Modeling applied to audio spectrogram). To implement MSM, we use Masked Autoencoders (MAE), an image self-supervised learning method. MAE learns to efficiently encode the small number of visible patches into latent representations to carry essential information for reconstructing a large number of masked patches. While training, MAE minimizes the reconstruction error, which uses the input as training signal, consequently achieving our goal. We conducted experiments on our MSM using MAE (MSM-MAE) models under the evaluation benchmark of the HEAR 2021 NeurIPS Challenge. 
Our MSM-MAE models outperformed the HEAR 2021 Challenge results on seven out of 15 tasks (e.g., accuracies of 73.4% on CREMA-D and 85.8% on LibriCount), while showing top performance on other tasks where specialized models perform better. We also investigate how the design choices of MSM-MAE impact the performance and conduct qualitative analysis of visualization outcomes to gain an understanding of learned representations. We make our code available online. ","Masked Spectrogram Modeling using Masked Autoencoders for Learning |
General-purpose Audio Representation",2,"['Our new paper is out! We explored a simple masked patch modeling w/o augmentation to learn a latent that describes the input spectrogram as it is.\n“Masked Spectrogram Modeling using Masked Autoencoders for Learning General-purpose Audio Representation”\n<LINK>', 'We used MAE with spectrogram input, evaluated the HEAR 2021 benchmark, and visualized reconstructions and self-attention maps to understand learned features. (I like visualizations.♥) https://t.co/aMlZp6QBfg']",22,04,450 |
51,154,1384512694183661572,65432023,Prof. Costas Andreopoulos,New #neutrino x-section pheno paper by my PhD student Jùlia Tena-Vidal <LINK>. First in a series of papers on the new tunes of the well-known #GENIE MC. Next: Hadronization. Funded by @livuniphysics LIV.DAT. Initial work supported by an @IPPP_Durham associateship,https://arxiv.org/abs/2104.09179,"We summarise the results of a study performed within the GENIE global analysis framework, revisiting the GENIE bare-nucleon cross-section tuning and, in particular, the tuning of a) the inclusive cross-section, b) the cross-section of low-multiplicity inelastic channels (single-pion and double-pion production), and c) the relative contributions of resonance and non-resonance processes to these final states. The same analysis was performed with several different comprehensive cross-section model sets available in GENIE Generator v3. In this work we performed a careful investigation of the observed tensions between exclusive and inclusive data, and installed analysis improvements to handle systematics in historic data. All tuned model configurations discussed in this paper are available through public releases of the GENIE Generator. With this paper we aim to support the consumers of these physics tunes by providing comprehensive summaries of our alternate model constructions, of the relevant datasets and their systematics, and of our tuning procedure and results. ",Neutrino-Nucleon Cross-Section Model Tuning in GENIE v3,1,['New #neutrino x-section pheno paper by my PhD student Jùlia Tena-Vidal <LINK>. First in a series of papers on the new tunes of the well-known #GENIE MC. Next: Hadronization. Funded by @livuniphysics LIV.DAT. Initial work supported by an @IPPP_Durham associateship'],21,04,263 |
52,277,1322301181687857152,888216099757490176,Maithra Raghu,"Do Wide and Deep neural networks Learn the Same Things? Paper: <LINK> We study representational properties of neural networks with different depths and widths on CIFAR/ImageNet, with insights on model capacity effects, feature similarity & characteristic errors <LINK> @OriolVinyalsML My (maybe weak) excuse is that we'd like people to skim the paper (or read the summary thread!) to understand some of the nuances in the conclusions! 🙂",https://arxiv.org/abs/2010.15327,"A key factor in the success of deep neural networks is the ability to scale models to improve performance by varying the architecture depth and width. This simple property of neural network design has resulted in highly effective architectures for a variety of tasks. Nevertheless, there is limited understanding of effects of depth and width on the learned representations. In this paper, we study this fundamental question. We begin by investigating how varying depth and width affects model hidden representations, finding a characteristic block structure in the hidden representations of larger capacity (wider or deeper) models. We demonstrate that this block structure arises when model capacity is large relative to the size of the training set, and is indicative of the underlying layers preserving and propagating the dominant principal component of their representations. This discovery has important ramifications for features learned by different models, namely, representations outside the block structure are often similar across architectures with varying widths and depths, but the block structure is unique to each model. We analyze the output predictions of different model architectures, finding that even when the overall accuracy is similar, wide and deep models exhibit distinctive error patterns and variations across classes. ","Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural |
Network Representations Vary with Width and Depth",2,"['Do Wide and Deep neural networks Learn the Same Things? \nPaper: <LINK>\n\nWe study representational properties of neural networks with different depths and widths on CIFAR/ImageNet, with insights on model capacity effects, feature similarity & characteristic errors <LINK>', ""@OriolVinyalsML My (maybe weak) excuse is that we'd like people to skim the paper (or read the summary thread!) to understand some of the nuances in the conclusions! 🙂""]",20,10,436 |
53,67,1061348205634314240,250846437,Nouha Dziri,"Our paper is now on Arxiv! <LINK> with @korymath, @ehsk0 and @ozaiane. We introduce THRED model which can generate diverse and contextual responses. We also introduce two new, easy to implement evaluation metrics and provide a large clean conversational dataset. @bose_joey @korymath @ehsk0 @ozaiane Thank you so much Joey :)) Congrats for your 2 papers in Nips 💪 See you in NIPS very soon!",https://arxiv.org/abs/1811.01063,"Sequence-to-Sequence (Seq2Seq) models have witnessed a notable success in generating natural conversational exchanges. Notwithstanding the syntactically well-formed responses generated by these neural network models, they are prone to be acontextual, short and generic. In this work, we introduce a Topical Hierarchical Recurrent Encoder Decoder (THRED), a novel, fully data-driven, multi-turn response generation system intended to produce contextual and topic-aware responses. Our model is built upon the basic Seq2Seq model by augmenting it with a hierarchical joint attention mechanism that incorporates topical concepts and previous interactions into the response generation. To train our model, we provide a clean and high-quality conversational dataset mined from Reddit comments. We evaluate THRED on two novel automated metrics, dubbed Semantic Similarity and Response Echo Index, as well as with human evaluation. Our experiments demonstrate that the proposed model is able to generate more diverse and contextually relevant responses compared to the strong baselines. ","Augmenting Neural Response Generation with Context-Aware Topical |
Attention",2,"['Our paper is now on Arxiv! <LINK> with @korymath, @ehsk0 and @ozaiane. We introduce THRED model which can generate diverse and contextual responses. We also introduce two new, easy to implement evaluation metrics and provide a large clean conversational dataset.', '@bose_joey @korymath @ehsk0 @ozaiane Thank you so much Joey :)) Congrats for your 2 papers in Nips 💪 See you in NIPS very soon!']",18,11,390 |
54,180,1268103502892498945,12745042,RJB Goudie,"New on arXiv: Yang Liu @MRC_BSU’s paper with me on inference for modularised Bayes models when ""cutting"" feedback from a module, due to possible misspecification Proposes a new algorithm for the cut distribution: Stochastic Approximation Cut Algorithm <LINK> <LINK>",https://arxiv.org/abs/2006.01584,"Bayesian modelling enables us to accommodate complex forms of data and make a comprehensive inference, but the effect of partial misspecification of the model is a concern. One approach in this setting is to modularize the model, and prevent feedback from suspect modules, using a cut model. After observing data, this leads to the cut distribution which normally does not have a closed-form. Previous studies have proposed algorithms to sample from this distribution, but these algorithms have unclear theoretical convergence properties. To address this, we propose a new algorithm called the Stochastic Approximation Cut algorithm (SACut) as an alternative. The algorithm is divided into two parallel chains. The main chain targets an approximation to the cut distribution; the auxiliary chain is used to form an adaptive proposal distribution for the main chain. We prove convergence of the samples drawn by the proposed algorithm and present the exact limit. Although SACut is biased, since the main chain does not target the exact cut distribution, we prove this bias can be reduced geometrically by increasing a user-chosen tuning parameter. In addition, parallel computing can be easily adopted for SACut, which greatly reduces computation time. ","Stochastic Approximation Cut Algorithm for Inference in Modularized |
Bayesian Models",1,"['New on arXiv: Yang Liu @MRC_BSU’s paper with me on inference for modularised Bayes models when ""cutting"" feedback from a module, due to possible misspecification\n\nProposes a new algorithm for the cut distribution: Stochastic Approximation Cut Algorithm\n\n<LINK> <LINK>']",20,06,265 |
55,140,1389822601741144066,560668519,Neil Houlsby,"New paper from Brain Zurich and Berlin! We try a conv and attention free vision architecture: MLP-Mixer (<LINK>) Simple is good, so we went as minimalist as possible (just MLPs!) to see whether modern training methods & data is sufficient... <LINK> Mixer consists of MLPs applied in alternation to the image patch embeddings (tokens) and feature channels. With pre-training, Mixer learns really good features, and works well for transfer (e.g. 87.9% ImageNet), sitting alongside even the best CNNs and Vision Transformers! ... Looking at the learned weights can be pretty fun too... <LINK> This continues our team’s work in vision, transfer, scalability, and architectures. As always, pleasure to work with a great team: @tolstikhini , @__kolesnikov__ , @giffmana , @XiaohuaZhai , @TomUnterthiner , @JessicaYung17 , @keysers , @kyosu ,@MarioLucic_ , Alexey Dosovitskiy @karpathy One small, but curious, difference with depthwise conv is parameter tying across channels. Slightly surprising (to me at least) that one can get away with it, but a really useful memory saving due to the large (entire image) receptive field.",https://arxiv.org/abs/2105.01601,"Convolutional Neural Networks (CNNs) are the go-to model for computer vision. Recently, attention-based networks, such as the Vision Transformer, have also become popular. In this paper we show that while convolutions and attention are both sufficient for good performance, neither of them are necessary. We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs). MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches (i.e. ""mixing"" the per-location features), and one with MLPs applied across patches (i.e. ""mixing"" spatial information). 
When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models. We hope that these results spark further research beyond the realms of well established CNNs and Transformers. ",MLP-Mixer: An all-MLP Architecture for Vision,5,"['New paper from Brain Zurich and Berlin!\n\nWe try a conv and attention free vision architecture: MLP-Mixer (<LINK>)\n\nSimple is good, so we went as minimalist as possible (just MLPs!) to see whether modern training methods & data is sufficient... <LINK>', 'Mixer consists of MLPs applied in alternation to the image patch embeddings (tokens) and feature channels.\n\nWith pre-training, Mixer learns really good features, and works well for transfer (e.g. 87.9% ImageNet), sitting alongside even the best CNNs and Vision Transformers! ...', 'Looking at the learned weights can be pretty fun too... https://t.co/JxaKH4a3sK', 'This continues our team’s work in vision, transfer, scalability, and architectures. As always, pleasure to work with a great team: @tolstikhini , @__kolesnikov__ , @giffmana , @XiaohuaZhai , @TomUnterthiner , @JessicaYung17 , @keysers , @kyosu ,@MarioLucic_ , Alexey Dosovitskiy', '@karpathy One small, but curious, difference with depthwise conv is parameter tying across channels. Slightly surprising (to me at least) that one can get away with it, but a really useful memory saving due to the large (entire image) receptive field.']",21,05,1120 |
56,39,1331653437252120580,923231130383536128,Eduardo Fonseca,"🔊New paper out! ""Unsupervised Contrastive Learning of Sound Event Representations"". Work done with @Diego_OrtegoH We learn audio representations by contrasting differently augmented views of sound events. paper: <LINK> code (soon): <LINK> (1/5) <LINK> The different views are computed via random patch sampling, mixing of training examples with unrelated backgrounds (mix-back), and other data augmentations. We show that unsupervised contrastive pre-training helps in scenarios of data scarcity and noisy labels! (2/5) Our results suggest that our method is able to learn useful sound event representations even with more limited resources (data and compute) than typically used in previous works. (3/5) Our work is similar to the recent Google's COLA <LINK> by @aaqib_saeed @neilzegh Main differences: we apply augmentations and we use more limited data for train/eval as our compute resources are more modest 😅 Our proposed example mixing strategy (mix-back) bears some similarities to the Mixup-noise just proposed this month here <LINK> (and 5/5)",https://arxiv.org/abs/2011.07616,"Self-supervised representation learning can mitigate the limitations in recognition tasks with few manually labeled data but abundant unlabeled data---a common scenario in sound event research. In this work, we explore unsupervised contrastive learning as a way to learn sound event representations. To this end, we propose to use the pretext task of contrasting differently augmented views of sound events. The views are computed primarily via mixing of training examples with unrelated backgrounds, followed by other data augmentations. We analyze the main components of our method via ablation experiments. We evaluate the learned representations using linear evaluation, and in two in-domain downstream sound event classification tasks, namely, using limited manually labeled data, and using noisy labeled data. 
Our results suggest that unsupervised contrastive pre-training can mitigate the impact of data scarcity and increase robustness against noisy labels, outperforming supervised baselines. ",Unsupervised Contrastive Learning of Sound Event Representations,5,"['🔊New paper out! ""Unsupervised Contrastive Learning of Sound Event Representations"". Work done with @Diego_OrtegoH\nWe learn audio representations by contrasting differently augmented views of sound events.\npaper: <LINK>\ncode (soon): <LINK>\n(1/5) <LINK>', 'The different views are computed via random patch sampling, mixing of training examples with unrelated backgrounds (mix-back), and other data augmentations.\nWe show that unsupervised contrastive pre-training helps in scenarios of data scarcity and noisy labels! (2/5)', 'Our results suggest that our method is able to learn useful sound event representations even with more limited resources (data and compute) than typically used in previous works. (3/5)', ""Our work is similar to the recent Google's COLA https://t.co/tlzCHMHp2N by @aaqib_saeed @neilzegh\nMain differences: we apply augmentations and we use more limited data for train/eval as our compute resources are more modest 😅"", 'Our proposed example mixing strategy (mix-back) bears some similarities to the Mixup-noise just proposed this month here https://t.co/FSbtqM4k6G\n(and 5/5)']",20,11,1051 |
57,170,1460857303465283589,1093715503762165760,Ilya Valmianski,"1/2 Check out our new extended abstract accepted to #ML4H2021 “Adding more data does not always help: A study in medical conversation summarization with PEGASUS” <LINK> We show empirical limits of PEGASUS fine-tuned to summarize medical dialogue. 2/2 we find that performance saturates as the dataset is scaled to ~1200 examples, while still far below theoretical maximum performance. We also look at several pseudo labeling/active labeling approaches but find that the raw number of examples is the most important factor.",https://arxiv.org/abs/2111.07564,"Medical conversation summarization is integral in capturing information gathered during interactions between patients and physicians. Summarized conversations are used to facilitate patient hand-offs between physicians, and as part of providing care in the future. Summaries, however, can be time-consuming to produce and require domain expertise. Modern pre-trained NLP models such as PEGASUS have emerged as capable alternatives to human summarization, reaching state-of-the-art performance on many summarization benchmarks. However, many downstream tasks still require at least moderately sized datasets to achieve satisfactory performance. In this work we (1) explore the effect of dataset size on transfer learning medical conversation summarization using PEGASUS and (2) evaluate various iterative labeling strategies in the low-data regime, following their success in the classification setting. We find that model performance saturates with increase in dataset size and that the various active-learning strategies evaluated all show equivalent performance consistent with simple dataset size increase. We also find that naive iterative pseudo-labeling is on-par or slightly worse than no pseudo-labeling. 
Our work sheds light on the successes and challenges of translating low-data regime techniques in classification to medical conversation summarization and helps guides future work in this space. Relevant code available at \url{this https URL}. ","Adding more data does not always help: A study in medical conversation |
summarization with PEGASUS",2,"['1/2 Check out our new extended abstract accepted to #ML4H2021 “Adding more data does not always help: A study in medical conversation summarization with PEGASUS”\n\n<LINK>\n\nWe show empirical limits of PEGASUS fine-tuned to summarize medical dialogue.', '2/2 we find that performance saturates as the dataset is scaled to ~1200 examples, while still far below theoretical maximum performance. We also look at several pseudo labeling/active labeling approaches but find that the raw number of examples is the most important factor.']",21,11,522 |
58,268,1368962213621415936,68538286,Dan Hendrycks,"To find the limits of Transformers, we collected 12,500 math problems. While a three-time IMO gold medalist got 90%, GPT-3 models got ~5%, with accuracy increasing slowly. If trends continue, ML models are far from achieving mathematical reasoning. <LINK> <LINK> @florian_tramer Competition mathematics problems sometimes have jargon that isn't in many K-12 mathematics classes, such as ""triangle orthocenter"" or ""units digit."" Fortunately the training set has 7,500 problems and solutions, which should provide enough background for strong models.",https://arxiv.org/abs/2103.03874,"Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations. To facilitate future research and increase accuracy on MATH, we also contribute a large auxiliary pretraining dataset which helps teach models the fundamentals of mathematics. Even though we are able to increase accuracy on MATH, our results show that accuracy remains relatively low, even with enormous Transformer models. Moreover, we find that simply increasing budgets and model parameter counts will be impractical for achieving strong mathematical reasoning if scaling trends continue. While scaling Transformers is automatically solving most other text-based tasks, scaling is not currently solving MATH. To have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community. ",Measuring Mathematical Problem Solving With the MATH Dataset,2,"['To find the limits of Transformers, we collected 12,500 math problems. 
While a three-time IMO gold medalist got 90%, GPT-3 models got ~5%, with accuracy increasing slowly.\n\nIf trends continue, ML models are far from achieving mathematical reasoning.\n\n<LINK> <LINK>', '@florian_tramer Competition mathematics problems sometimes have jargon that isn\'t in many K-12 mathematics classes, such as ""triangle orthocenter"" or ""units digit."" Fortunately the training set has 7,500 problems and solutions, which should provide enough background for strong models.']",21,03,548 |
59,15,1323078447787315200,983857052840636417,Rob Corless,"For those interested in AI in math education, @rotmanphilo maybe, please find my new paper with @JamesHDavenport and @euniceyschan and others at <LINK> . We talk about our respective theory and practice; in some ways old-school, the paper is formed by experience.",https://arxiv.org/abs/2010.16300,"Over the past thirty years or so the authors have been teaching various programming for mathematics courses at our respective Universities, as well as incorporating computer algebra and numerical computation into traditional mathematics courses. These activities are, in some important ways, natural precursors to the use of Artificial Intelligence in Mathematics Education. This paper reflects on some of our course designs and experiences and is therefore a mix of theory and practice. Underlying both is a clear recognition of the value of computer programming for mathematics education. We use this theory and practice to suggest good techniques for and to raise questions about the use of AI in Mathematics Education. ",Teaching Programming for Mathematical Scientists,1,"['For those interested in AI in math education, @rotmanphilo maybe, please find my new paper with @JamesHDavenport and @euniceyschan and others at <LINK> . We talk about our respective theory and practice; in some ways old-school, the paper is formed by experience.']",20,10,263 |
60,148,1511260351307821056,1133734070293323776,Hammer Lab ML,"New (wet) paper alert 🥳🌊 on learning and adapting to the #dynamics in #water distribution networks. A #preprint of our work ""SAM-kNN Regressor for Online Learning in Water Distribution Networks"" by Jonathan Jakob and André Artelt now available @arxiv <LINK>",https://arxiv.org/abs/2204.01436,"Water distribution networks are a key component of modern infrastructure for housing and industry. They transport and distribute water via widely branched networks from sources to consumers. In order to guarantee a working network at all times, the water supply company continuously monitors the network and takes actions when necessary -- e.g. reacting to leakages, sensor faults and drops in water quality. Since real world networks are too large and complex to be monitored by a human, algorithmic monitoring systems have been developed. A popular type of such systems are residual based anomaly detection systems that can detect events such as leakages and sensor faults. For a continuous high quality monitoring, it is necessary for these systems to adapt to changed demands and presence of various anomalies. In this work, we propose an adaption of the incremental SAM-kNN classifier for regression to build a residual based anomaly detection system for water distribution networks that is able to adapt to any kind of change. ",SAM-kNN Regressor for Online Learning in Water Distribution Networks,1,"['New (wet) paper alert 🥳🌊\non learning and adapting to the #dynamics in #water distribution networks.\nA #preprint of our work ""SAM-kNN Regressor for Online Learning in Water Distribution Networks"" by Jonathan Jakob and André Artelt now available @arxiv\n<LINK>']",22,04,257 |
Based on the repository at https://github.com/bnitsan/PaperTweet/.
Every entry in the dataset represents a Twitter thread written about a new paper on arXiv, likely by one of the original authors.
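As a minimal sketch of how the rows above can be consumed (the inline two-row sample and its column subset are assumptions for illustration — the real file also carries the full tweet text, `Abstract`, and `Tweets_coarse` columns shown in the preview):

```python
import csv
import io

# Hypothetical two-row sample mirroring the column layout of this preview;
# in practice you would open the dataset's CSV file instead of an inline string.
csv_sample = """idx,TweetID,AuthorName,arxiv_link,Title,Thread_length,year,month,tweet_length
0,1463733877021700096,Zhimei Ren,https://arxiv.org/abs/2111.12161,Sensitivity analysis of ITEs,5,21,11,1157
1,1380442667964850179,Edward Bryant,https://arxiv.org/abs/2104.03159,HIP 41378f TTV,1,21,4,267
"""

rows = list(csv.DictReader(io.StringIO(csv_sample)))

# Example query: keep only multi-tweet threads and pair each author
# with the arXiv paper the thread announces.
multi = [r for r in rows if int(r["Thread_length"]) > 1]
for r in multi:
    print(r["AuthorName"], "->", r["arxiv_link"])
```

The same pattern (filter on `Thread_length`, `year`, or `tweet_length`, then join tweets to abstracts via `arxiv_link`) applies to the full file.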
license: mit