Columns: Title, Authors, Abstract, entry_id, Date, Categories, year
Coherent Dialogue with Attention-based Language Models
Hongyuan Mei, Mohit Bansal, Matthew R. Walter
We model coherent conversation continuation via RNN-based dialogue models equipped with a dynamic attention mechanism. Our attention-RNN language model dynamically increases the scope of attention on the history as the conversation continues, as opposed to standard attention (or alignment) models with a fixed input scope in a sequence-to-sequence model. This allows each generated word to be associated with the most relevant words in its corresponding conversation history. We evaluate the model on two popular dialogue datasets, the open-domain MovieTriples dataset and the closed-domain Ubuntu Troubleshoot dataset, and achieve significant improvements over the state-of-the-art and baselines on several metrics, including complementary diversity-based metrics, human evaluation, and qualitative visualizations. We also show that a vanilla RNN with dynamic attention outperforms more complex memory models (e.g., LSTM and GRU) by allowing for flexible, long-distance memory. We promote further coherence via topic modeling-based reranking.
http://arxiv.org/abs/1611.06997v1
"2016-11-21T20:25:19Z"
cs.CL, cs.AI
2016
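As an illustration of the dynamic-attention idea in the abstract above, here is a minimal NumPy sketch in which the attention scope is simply the entire conversation history seen so far, growing by one hidden state per generated word. The dot-product scoring and the function names are assumptions for illustration; the paper's actual scoring function and architecture may differ.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_attention(history, h_t):
    """Attend over the entire conversation history seen so far.

    history : (T, d) array of hidden states for all previous words;
              it grows by one row per generated word, so the attention
              scope expands as the conversation continues.
    h_t     : (d,) current decoder hidden state.
    Returns the context vector and the attention weights.
    """
    scores = history @ h_t        # dot-product relevance scores (a simplification)
    weights = softmax(scores)     # normalize over the full history
    context = weights @ history   # weighted sum of history states
    return context, weights

# Toy usage: the scope at step t covers all t previous states.
rng = np.random.default_rng(0)
history = rng.normal(size=(5, 8))   # 5 past words, 8-dim states
h_t = rng.normal(size=8)
context, weights = dynamic_attention(history, h_t)
```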
Ontology Driven Disease Incidence Detection on Twitter
Mark Abraham Magumba, Peter Nabende
In this work we address the issue of generic automated disease incidence monitoring on Twitter. We employ an ontology of disease-related concepts and use it to obtain a conceptual representation of tweets. Unlike previous keyword-based systems and topic modeling approaches, our ontological approach allows us to apply more stringent criteria for determining which messages are relevant, such as spatial and temporal characteristics, whilst giving a stronger guarantee that the resulting models will perform well on new data that may be lexically divergent. We achieve this by training learners on concepts rather than individual words. For training we use a dataset containing mentions of influenza and Listeria, and use the learned models to classify datasets containing mentions of an arbitrary selection of other diseases. We show that our ontological approach achieves good performance on this task using a variety of natural language processing techniques. We also show that word vectors can be learned directly from our concepts to achieve even better results.
http://arxiv.org/abs/1611.06671v1
"2016-11-21T07:32:56Z"
cs.CL, cs.IR
2016
Anchor-Free Correlated Topic Modeling: Identifiability and Algorithm
Kejun Huang, Xiao Fu, Nicholas D. Sidiropoulos
In topic modeling, many algorithms that guarantee identifiability of the topics have been developed under the premise that there exist anchor words -- i.e., words that only appear (with positive probability) in one topic. Follow-up work has resorted to three or higher-order statistics of the data corpus to relax the anchor word assumption. Reliable estimates of higher-order statistics are hard to obtain, however, and the identification of topics under those models hinges on uncorrelatedness of the topics, which can be unrealistic. This paper revisits topic modeling based on second-order moments, and proposes an anchor-free topic mining framework. The proposed approach guarantees the identification of the topics under a much milder condition compared to the anchor-word assumption, thereby exhibiting much better robustness in practice. The associated algorithm only involves one eigen-decomposition and a few small linear programs. This makes it easy to implement and scale up to very large problem instances. Experiments using the TDT2 and Reuters-21578 corpora demonstrate that the proposed anchor-free approach exhibits very favorable performance (measured using coherence, similarity count, and clustering accuracy metrics) compared to the prior art.
http://arxiv.org/abs/1611.05010v1
"2016-11-15T20:06:40Z"
stat.ML, cs.CL, cs.IR, cs.SI
2016
`Who would have thought of that!': A Hierarchical Topic Model for Extraction of Sarcasm-prevalent Topics and Sarcasm Detection
Aditya Joshi, Prayas Jain, Pushpak Bhattacharyya, Mark Carman
Topic Models have been reported to be beneficial for aspect-based sentiment analysis. This paper reports a simple topic model for sarcasm detection, a first, to the best of our knowledge. Designed on the basis of the intuition that sarcastic tweets are likely to have a mixture of words of both sentiments, as against tweets with literal sentiment (either positive or negative), our hierarchical topic model discovers sarcasm-prevalent topics and topic-level sentiment. Using a dataset of tweets labeled using hashtags, the model estimates topic-level and sentiment-level distributions. Our evaluation shows that topics such as `work', `gun laws', `weather' are sarcasm-prevalent topics. Our model is also able to discover the mixture of sentiment-bearing words that exist in a text of a given sentiment-related label. Finally, we apply our model to predict sarcasm in tweets. We outperform two prior works based on statistical classifiers with specific features by around 25\%.
http://arxiv.org/abs/1611.04326v2
"2016-11-14T10:40:44Z"
cs.CL
2016
TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency
Adji B. Dieng, Chong Wang, Jianfeng Gao, John Paisley
In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based language model designed to directly capture the global semantic meaning relating words in a document via latent topics. Because of their sequential nature, RNNs are good at capturing the local structure of a word sequence - both semantic and syntactic - but might face difficulty remembering long-range dependencies. Intuitively, these long-range dependencies are of semantic nature. In contrast, latent topic models are able to capture the global underlying semantic structure of a document but do not account for word ordering. The proposed TopicRNN model integrates the merits of RNNs and latent topic models: it captures local (syntactic) dependencies using an RNN and global (semantic) dependencies using latent topics. Unlike previous work on contextual RNN language modeling, our model is learned end-to-end. Empirical results on word prediction show that TopicRNN outperforms existing contextual RNN baselines. In addition, TopicRNN can be used as an unsupervised feature extractor for documents. We do this for sentiment analysis on the IMDB movie review dataset and report an error rate of $6.28\%$. This is comparable to the state-of-the-art $5.91\%$ resulting from a semi-supervised approach. Finally, TopicRNN also yields sensible topics, making it a useful alternative to document models such as latent Dirichlet allocation.
http://arxiv.org/abs/1611.01702v2
"2016-11-05T21:25:07Z"
cs.CL, cs.AI, cs.LG, stat.ML
2016
Generalized Topic Modeling
Avrim Blum, Nika Haghtalab
Recently there has been significant activity in developing algorithms with provable guarantees for topic modeling. In standard topic models, a topic (such as sports, business, or politics) is viewed as a probability distribution $\vec a_i$ over words, and a document is generated by first selecting a mixture $\vec w$ over topics, and then generating words i.i.d. from the associated mixture $A\vec{w}$. Given a large collection of such documents, the goal is to recover the topic vectors and then to correctly classify new documents according to their topic mixture. In this work we consider a broad generalization of this framework in which words are no longer assumed to be drawn i.i.d. and instead a topic is a complex distribution over sequences of paragraphs. Since one could not hope to even represent such a distribution in general (even if paragraphs are given using some natural feature representation), we aim instead to directly learn a document classifier. That is, we aim to learn a predictor that given a new document, accurately predicts its topic mixture, without learning the distributions explicitly. We present several natural conditions under which one can do this efficiently and discuss issues such as noise tolerance and sample complexity in this model. More generally, our model can be viewed as a generalization of the multi-view or co-training setting in machine learning.
http://arxiv.org/abs/1611.01259v1
"2016-11-04T03:45:03Z"
cs.LG, cs.CL, cs.DS, cs.IR
2016
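The standard generative process that this abstract generalizes (draw a topic mixture $\vec w$, then draw words i.i.d. from $A\vec{w}$) can be sketched in a few lines of NumPy; the Dirichlet prior and the toy sizes below are illustrative assumptions, not part of the paper.

```python
import numpy as np

def generate_document(A, alpha, n_words, rng):
    """Standard topic-model generative process described above.

    A       : (V, K) matrix whose columns are topic distributions a_i
              over a vocabulary of V words.
    alpha   : Dirichlet prior over the K topics.
    n_words : document length.
    """
    w = rng.dirichlet(alpha)    # mixture over topics for this document
    word_dist = A @ w           # mixture distribution A w over words
    return rng.choice(len(word_dist), size=n_words, p=word_dist)

rng = np.random.default_rng(0)
V, K = 1000, 5
A = rng.dirichlet(np.ones(V), size=K).T   # K random topics as columns
doc = generate_document(A, alpha=np.ones(K), n_words=50, rng=rng)
```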
A Decision Support System for Inbound Marketers: An Empirical Use of Latent Dirichlet Allocation Topic Model to Guide Infographic Designers
Meisam Hejazi Nia
An infographic is a type of information presentation that inbound marketers use. I suggest a method that allows infographic designers to benchmark their design against previous viral infographics, to measure whether a given design decision can help or hurt the probability of the design becoming viral.
http://arxiv.org/abs/1611.00872v1
"2016-11-03T03:53:25Z"
cs.HC, cs.IR
2016
Learning Methods for Dynamic Topic Modeling in Automated Behaviour Analysis
Olga Isupova, Danil Kuzin, Lyudmila Mihaylova
Semi-supervised and unsupervised systems provide operators with invaluable support and can tremendously reduce the operators' load. In light of the necessity to process large volumes of video data and provide autonomous decisions, this work proposes new learning algorithms for activity analysis in video. The activities and behaviours are described by a dynamic topic model. Two novel learning algorithms based on the expectation maximisation approach and variational Bayes inference are proposed. Theoretical derivations of the posterior of the model parameters are given. The designed learning algorithms are compared with the Gibbs sampling inference scheme introduced earlier in the literature. A detailed comparison of the learning algorithms is presented on real video data. We also propose an anomaly localisation procedure, elegantly embedded in the topic modeling framework. The proposed framework can be applied to a number of areas, including transportation systems, security and surveillance.
http://arxiv.org/abs/1611.00565v2
"2016-11-02T12:08:41Z"
stat.ML
2016
Dynamic matrix recovery from incomplete observations under an exact low-rank constraint
Liangbei Xu, Mark A. Davenport
Low-rank matrix factorizations arise in a wide variety of applications -- including recommendation systems, topic models, and source separation, to name just a few. In these and many other applications, it has been widely noted that by incorporating temporal information and allowing for the possibility of time-varying models, significant improvements are possible in practice. However, despite the reported superior empirical performance of these dynamic models over their static counterparts, there is limited theoretical justification for introducing these more complex models. In this paper we aim to address this gap by studying the problem of recovering a dynamically evolving low-rank matrix from incomplete observations. First, we propose the locally weighted matrix smoothing (LOWEMS) framework as one possible approach to dynamic matrix recovery. We then establish error bounds for LOWEMS in both the {\em matrix sensing} and {\em matrix completion} observation models. Our results quantify the potential benefits of exploiting dynamic constraints both in terms of recovery accuracy and sample complexity. To illustrate these benefits we provide both synthetic and real-world experimental results.
http://arxiv.org/abs/1610.09420v1
"2016-10-28T22:44:29Z"
stat.ML, cs.LG
2016
Integrating Topic Models and Latent Factors for Recommendation
Danis J. Wilson, Wei Zhang
Nowadays, we have large amounts of online items in various web-based applications, which makes it an important task to build effective personalized recommender systems so as to save users' efforts in information seeking. One of the most extensively and successfully used methods for personalized recommendation is the Collaborative Filtering (CF) technique, which makes recommendations based on users' historical choices as well as those of others. The most popular CF methods, such as the Latent Factor Model (LFM), model how users evaluate items by understanding the hidden dimensions or factors of their opinions. How to model these hidden factors is key to improving the performance of a recommender system. In this work, we consider the problem of hotel recommendation for travel planning services by integrating location information and user preferences for recommendation. The intuition is that user preferences may change dynamically over different locations, so treating the historical decisions of a user as static or universally applicable can be infeasible in real-world applications. For example, users may prefer chain brand hotels with standard configurations when traveling for business, while they may prefer unique local hotels when traveling for entertainment. In this paper, we aim to provide trip-level personalization for users in recommendation.
http://arxiv.org/abs/1610.09077v3
"2016-10-28T04:20:54Z"
cs.IR, cs.AI
2016
Scalable Dynamic Topic Modeling with Clustered Latent Dirichlet Allocation (CLDA)
Chris Gropp, Alexander Herzog, Ilya Safro, Paul W. Wilson, Amy W. Apon
Topic modeling, a method for extracting the underlying themes from a collection of documents, is an increasingly important component of the design of intelligent systems enabling the sense-making of highly dynamic and diverse streams of text data. Traditional methods such as Dynamic Topic Modeling (DTM) do not lend themselves well to direct parallelization because of dependencies from one time step to another. In this paper, we introduce and empirically analyze Clustered Latent Dirichlet Allocation (CLDA), a method for extracting dynamic latent topics from a collection of documents. Our approach is based on data decomposition in which the data is partitioned into segments, followed by topic modeling on the individual segments. The resulting local models are then combined into a global solution using clustering. The decomposition and resulting parallelization leads to very fast runtime even on very large datasets. Our approach furthermore provides insight into how the composition of topics changes over time and can also be applied using other data partitioning strategies over any discrete features of the data, such as geographic features or classes of users. In this paper CLDA is applied successfully to seventeen years of NIPS conference papers (2,484 documents and 3,280,697 words), seventeen years of computer science journal abstracts (533,560 documents and 32,551,540 words), and to forty years of the PubMed corpus (4,025,978 documents and 273,853,980 words).
http://arxiv.org/abs/1610.07703v3
"2016-10-25T01:50:24Z"
cs.IR, stat.ML
2016
CuMF_SGD: Fast and Scalable Matrix Factorization
Xiaolong Xie, Wei Tan, Liana L. Fong, Yun Liang
Matrix factorization (MF) has been widely used in, e.g., recommender systems, topic modeling and word embedding. Stochastic gradient descent (SGD) is popular in solving MF problems because it can deal with large data sets and easily supports incremental learning. We observed that SGD for MF is memory bound. Meanwhile, single-node CPU systems with caching perform well only for small data sets; distributed systems have higher aggregated memory bandwidth but suffer from relatively slow network connections. This observation inspires us to accelerate MF by utilizing GPUs' high memory bandwidth and fast intra-node connection. We present cuMF_SGD, a CUDA-based SGD solution for large-scale MF problems. On a single GPU, we design two workload scheduling schemes, i.e., batch-Hogwild! and wavefront-update, that fully exploit the massive number of cores. In particular, batch-Hogwild!, a vectorized version of Hogwild!, overcomes the issue of memory discontinuity. We also develop highly optimized kernels for the SGD update, leveraging cache, warp-shuffle instructions and half-precision floats. We also design a partition scheme to utilize multiple GPUs while addressing the well-known convergence issue when parallelizing SGD. On three data sets with only one Maxwell or Pascal GPU, cuMF_SGD runs 3.1X-28.2X as fast compared with state-of-the-art CPU solutions on 1-64 CPU nodes. Evaluations also show that cuMF_SGD scales well on multiple GPUs on large data sets.
http://arxiv.org/abs/1610.05838v3
"2016-10-19T01:28:11Z"
cs.LG, cs.NA
2016
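For reference, the plain serial SGD update for MF that cuMF_SGD accelerates can be sketched as below. This is a minimal CPU version for intuition only, not the paper's batch-Hogwild! or wavefront-update GPU schemes, and the learning rate and regularization values are arbitrary.

```python
import numpy as np

def sgd_mf_epoch(ratings, P, Q, lr=0.01, reg=0.05, rng=None):
    """One epoch of plain SGD for matrix factorization R ~ P Q^T.

    ratings : list of (user i, item j, rating r) observations.
    P, Q    : (n_users, k) and (n_items, k) factor matrices, updated in place.
    """
    rng = rng or np.random.default_rng()
    for idx in rng.permutation(len(ratings)):
        i, j, r = ratings[idx]
        err = r - P[i] @ Q[j]                    # prediction error
        p_old = P[i].copy()
        P[i] += lr * (err * Q[j] - reg * P[i])   # gradient step on user factor
        Q[j] += lr * (err * p_old - reg * Q[j])  # gradient step on item factor

# Toy usage
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(4, 3))
Q = rng.normal(scale=0.1, size=(5, 3))
data = [(0, 1, 4.0), (2, 3, 1.0), (1, 0, 5.0)]
for _ in range(100):
    sgd_mf_epoch(data, P, Q, rng=rng)
```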
Modeling community structure and topics in dynamic text networks
Teague Henry, David Banks, Christine Chai, Derek Owens-Oas
The last decade has seen great progress in both dynamic network modeling and topic modeling. This paper draws upon both areas to create a Bayesian method that allows topic discovery to inform the latent network model and the network structure to facilitate topic identification. We apply this method to the 467 top political blogs of 2012. Our results find complex community structure within this set of blogs, where community membership depends strongly upon the set of topics in which the blogger is interested.
http://arxiv.org/abs/1610.05756v2
"2016-10-18T19:39:44Z"
cs.SI, physics.soc-ph, stat.ML
2016
SaberLDA: Sparsity-Aware Learning of Topic Models on GPUs
Kaiwei Li, Jianfei Chen, Wenguang Chen, Jun Zhu
Latent Dirichlet Allocation (LDA) is a popular tool for analyzing discrete count data such as text and images. Applications require LDA to handle both large datasets and a large number of topics. Though distributed CPU systems have been used, GPU-based systems have emerged as a promising alternative because of the high computational power and memory bandwidth of GPUs. However, existing GPU-based LDA systems cannot support a large number of topics because they use algorithms on dense data structures whose time and space complexity is linear in the number of topics. In this paper, we propose SaberLDA, a GPU-based LDA system that implements a sparsity-aware algorithm to achieve sublinear time complexity and scales well to learn a large number of topics. To address the challenges introduced by sparsity, we propose a novel data layout, a new warp-based sampling kernel, and an efficient sparse count matrix updating algorithm that improves locality, makes efficient utilization of GPU warps, and reduces memory consumption. Experiments show that SaberLDA can learn from billions-of-tokens-scale data with up to 10,000 topics, which is almost two orders of magnitude larger than that of previous GPU-based systems. With a single GPU card, SaberLDA is able to learn 10,000 topics from a dataset of billions of tokens in a few hours, which was previously only achievable with clusters of tens of machines.
http://arxiv.org/abs/1610.02496v2
"2016-10-08T07:57:00Z"
cs.DC, cs.IR, cs.LG, stat.ML
2016
Decentralized Topic Modelling with Latent Dirichlet Allocation
Igor Colin, Christophe Dupuy
Privacy-preserving networks can be modelled as decentralized networks (e.g., sensors, connected objects, smartphones), where communication between nodes of the network is not controlled by an all-knowing, central node. For this type of network, the main issue is to gather/learn global information on the network (e.g., by optimizing a global cost function) while keeping the (sensitive) information at each node. In this work, we focus on text information that agents do not want to share (e.g., text messages, emails, confidential reports). We use recent advances in decentralized optimization and topic models to infer topics from a graph with limited communication. We propose a method to adapt the latent Dirichlet allocation (LDA) model to decentralized optimization, and show on synthetic data that we still recover similar parameters and similar performance at each node as with stochastic methods that access the whole information in the graph.
http://arxiv.org/abs/1610.01417v1
"2016-10-05T13:45:53Z"
stat.ML, cs.LG
2016
The Search Problem in Mixture Models
Avik Ray, Joe Neeman, Sujay Sanghavi, Sanjay Shakkottai
We consider the task of learning the parameters of a {\em single} component of a mixture model, for the case when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than solving the overall original problem, where one learns parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each one of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy, and also improved computational complexity compared to existing moment-based mixture model algorithms (e.g. tensor methods). We also illustrate several natural ways one can obtain such side information, for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvement in runtime and accuracy.
http://arxiv.org/abs/1610.00843v2
"2016-10-04T05:01:18Z"
stat.ML, cs.LG
2016
Topic Browsing for Research Papers with Hierarchical Latent Tree Analysis
Leonard K. M. Poon, Nevin L. Zhang
Academic researchers often face a large collection of research papers in the literature. This problem may be even worse for postgraduate students who are new to a field and may not know where to start. To address this problem, we have developed an online catalog of research papers where the papers have been automatically categorized by a topic model. The catalog contains 7719 papers from the proceedings of two artificial intelligence conferences from 2000 to 2015. Rather than the commonly used Latent Dirichlet Allocation, we use a recently proposed method called hierarchical latent tree analysis for topic modeling. The resulting topic model contains a hierarchy of topics so that users can browse the topics from the top level to the bottom level. The topic model contains a manageable number of general topics at the top level and allows thousands of fine-grained topics at the bottom level. It can also detect topics that have emerged recently.
http://arxiv.org/abs/1609.09188v1
"2016-09-29T03:22:01Z"
cs.CL, cs.IR, cs.LG
2016
MPI-FAUN: An MPI-Based Framework for Alternating-Updating Nonnegative Matrix Factorization
Ramakrishnan Kannan, Grey Ballard, Haesun Park
Non-negative matrix factorization (NMF) is the problem of determining two non-negative low-rank factors $W$ and $H$, for the given input matrix $A$, such that $A \approx W H$. NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient parallel algorithms to solve the problem for big data sets. The main contribution of this work is a new, high-performance parallel computational framework for a broad class of NMF algorithms that iteratively solves alternating non-negative least squares (NLS) subproblems for $W$ and $H$. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). The framework is flexible and able to leverage a variety of NMF and NLS algorithms, including Multiplicative Update, Hierarchical Alternating Least Squares, and Block Principal Pivoting. Our implementation allows us to benchmark and compare different algorithms on massive dense and sparse data matrices whose sizes span from a few hundred million to billions of entries. We demonstrate the scalability of our algorithm and compare it with baseline implementations, showing significant performance improvements. The code and the datasets used for conducting the experiments are available online.
http://arxiv.org/abs/1609.09154v1
"2016-09-28T23:31:45Z"
cs.DC, cs.NA, stat.ML
2016
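As one concrete instance of the algorithms such a framework can host, here is a minimal single-node NumPy sketch of the Multiplicative Update rule named in the abstract (the classic Lee-Seung updates for the Frobenius objective); the MPI distribution and communication-avoiding machinery of MPI-FAUN itself is not shown.

```python
import numpy as np

def nmf_multiplicative(A, k, n_iter=200, eps=1e-9, rng=None):
    """Multiplicative-update NMF: find W >= 0, H >= 0 with A ~ W H.

    Lee-Seung updates for the Frobenius-norm objective; each step keeps
    the factors non-negative because it only multiplies by non-negative
    ratios.
    """
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

A = np.abs(np.random.default_rng(1).normal(size=(30, 20)))
W, H = nmf_multiplicative(A, k=5)
print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))  # relative error
```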
Topic Modeling over Short Texts by Incorporating Word Embeddings
Jipeng Qiang, Ping Chen, Tong Wang, Xindong Wu
Inferring topics from the overwhelming amount of short texts becomes a critical but challenging task for many content analysis tasks, such as content characterizing, user interest profiling, and emerging topic detecting. Existing methods such as probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA) cannot solve this problem very well since only very limited word co-occurrence information is available in short texts. This paper studies how to incorporate external word correlation knowledge into short texts to improve the coherence of topic modeling. Based on recent results in word embeddings that learn semantic representations for words from a large corpus, we introduce a novel method, Embedding-based Topic Model (ETM), to learn latent topics from short texts. ETM not only solves the problem of very limited word co-occurrence information by aggregating short texts into long pseudo-texts, but also utilizes a Markov Random Field regularized model that gives correlated words a better chance to be put into the same topic. The experiments on real-world datasets validate the effectiveness of our model compared with state-of-the-art models.
http://arxiv.org/abs/1609.08496v1
"2016-09-27T15:26:07Z"
cs.CL, cs.IR, cs.LG
2016
Fast Learning of Clusters and Topics via Sparse Posteriors
Michael C. Hughes, Erik B. Sudderth
Mixture models and topic models generate each observation from a single cluster, but standard variational posteriors for each observation assign positive probability to all possible clusters. This requires dense storage and runtime costs that scale with the total number of clusters, even though typically only a few clusters have significant posterior mass for any data point. We propose a constrained family of sparse variational distributions that allow at most $L$ non-zero entries, where the tunable threshold $L$ trades off speed for accuracy. Previous sparse approximations have used hard assignments ($L=1$), but we find that moderate values of $L>1$ provide superior performance. Our approach easily integrates with stochastic or incremental optimization algorithms to scale to millions of examples. Experiments training mixture models of image patches and topic models for news articles show that our approach produces better-quality models in far less time than baseline methods.
http://arxiv.org/abs/1609.07521v1
"2016-09-23T21:18:31Z"
stat.ML, cs.AI, cs.LG
2016
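The core operation the abstract describes, restricting a per-observation posterior to at most $L$ non-zero entries, can be sketched as follows; this standalone projection is an illustrative simplification of how the constraint would sit inside a full variational optimization loop.

```python
import numpy as np

def sparsify_posterior(r, L):
    """Project a dense cluster posterior onto at most L non-zero entries.

    Keeps the L largest responsibilities, zeroes the rest, and renormalizes
    so the result is still a probability vector; L=1 recovers the classic
    hard assignment.
    """
    sparse = np.zeros_like(r)
    top = np.argsort(r)[-L:]       # indices of the L largest entries
    sparse[top] = r[top]
    return sparse / sparse.sum()

r = np.array([0.55, 0.25, 0.12, 0.05, 0.03])
print(sparsify_posterior(r, L=2))  # [0.6875, 0.3125, 0, 0, 0]
```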
Bibliographic Analysis with the Citation Network Topic Model
Kar Wai Lim, Wray Buntine
Bibliographic analysis considers authors' research areas, the citation network and paper content, among other things. In this paper, we combine these three in a topic model that produces a bibliographic model of authors, topics and documents using a non-parametric extension of a combination of the Poisson mixed-topic link model and the author-topic model. We propose a novel and efficient inference algorithm for the model to explore subsets of research publications from CiteSeerX. Our model demonstrates improved performance in both model fitting and a clustering task compared to several baselines.
http://arxiv.org/abs/1609.06826v1
"2016-09-22T05:46:46Z"
cs.DL, cs.LG, stat.ML
2016
Twitter-Network Topic Model: A Full Bayesian Treatment for Social Network and Text Modeling
Kar Wai Lim, Changyou Chen, Wray Buntine
Twitter data is extremely noisy -- each tweet is short, unstructured and written in informal language, a challenge for current topic modeling. On the other hand, tweets are accompanied by extra information such as authorship, hashtags and the user-follower network. Exploiting this additional information, we propose the Twitter-Network (TN) topic model to jointly model the text and the social network in a fully Bayesian nonparametric way. The TN topic model employs the hierarchical Poisson-Dirichlet processes (PDP) for text modeling and a Gaussian process random function model for social network modeling. We show that the TN topic model significantly outperforms several existing nonparametric models due to its flexibility. Moreover, the TN topic model enables additional informative inference such as authors' interests and hashtag analysis, as well as leading to further applications such as author recommendation, automatic topic labeling and hashtag suggestion. Note that our general inference framework can readily be applied to other topic models with embedded PDP nodes.
http://arxiv.org/abs/1609.06791v1
"2016-09-22T01:08:31Z"
cs.CL, cs.IR, cs.SI
2016
Nonparametric Bayesian Topic Modelling with the Hierarchical Pitman-Yor Processes
Kar Wai Lim, Wray Buntine, Changyou Chen, Lan Du
The Dirichlet process and its extension, the Pitman-Yor process, are stochastic processes that take probability distributions as a parameter. These processes can be stacked up to form a hierarchical nonparametric Bayesian model. In this article, we present efficient methods for the use of these processes in this hierarchical context, and apply them to latent variable models for text analytics. In particular, we propose a general framework for designing these Bayesian models, which are called topic models in the computer science community. We then propose a specific nonparametric Bayesian topic model for modelling text from social media. We focus on tweets (posts on Twitter) in this article due to their ease of access. We find that our nonparametric model performs better than existing parametric models in both goodness of fit and real world applications.
http://arxiv.org/abs/1609.06783v1
"2016-09-22T00:10:16Z"
stat.ML, cs.CL, cs.LG
2016
Twitter Opinion Topic Model: Extracting Product Opinions from Tweets by Leveraging Hashtags and Sentiment Lexicon
Kar Wai Lim, Wray Buntine
Aspect-based opinion mining is widely applied to review data to aggregate or summarize opinions of a product, and the current state-of-the-art is achieved with Latent Dirichlet Allocation (LDA)-based models. Although social media data like tweets are laden with opinions, their "dirty" nature (as natural language) has discouraged researchers from applying LDA-based opinion models to product review mining. Tweets are often informal, unstructured and lacking labeled data such as categories and ratings, making them challenging for product opinion mining. In this paper, we propose an LDA-based opinion model named Twitter Opinion Topic Model (TOTM) for opinion mining and sentiment analysis. TOTM leverages hashtags, mentions, emoticons and strong sentiment words that are present in tweets in its discovery process. It improves opinion prediction by modeling the target-opinion interaction directly, thus discovering target-specific opinion words, neglected in existing approaches. Moreover, we propose a new formulation for incorporating sentiment prior information into a topic model, by utilizing an existing public sentiment lexicon. This is novel in that it learns and updates with the data. We conduct experiments on 9 million tweets on electronic products, and demonstrate the improved performance of TOTM in both quantitative evaluations and qualitative analysis. We show that aspect-based opinion analysis on a massive volume of tweets provides useful opinions on products.
http://arxiv.org/abs/1609.06578v1
"2016-09-21T14:25:23Z"
cs.CL, cs.IR, cs.LG
2016
Bibliographic Analysis on Research Publications using Authors, Categorical Labels and the Citation Network
Kar Wai Lim, Wray Buntine
Bibliographic analysis considers the author's research areas, the citation network and the paper content, among other things. In this paper, we combine these three in a topic model that produces a bibliographic model of authors, topics and documents, using a nonparametric extension of a combination of the Poisson mixed-topic link model and the author-topic model. This gives rise to the Citation Network Topic Model (CNTM). We propose a novel and efficient inference algorithm for the CNTM to explore subsets of research publications from CiteSeerX. The publication datasets are organised into three corpora, totalling about 168k publications with about 62k authors. The queried datasets are made available online. On three publicly available corpora, in addition to the queried datasets, our proposed model demonstrates improved performance in both model fitting and document clustering, compared to several baselines. Moreover, our model allows extraction of additional useful knowledge from the corpora, such as the visualisation of the author-topics network. Additionally, we propose a simple method to incorporate supervision into topic modelling to achieve further improvement on the clustering task.
http://arxiv.org/abs/1609.06532v1
"2016-09-21T12:44:37Z"
cs.DL, cs.LG, stat.ML
2016
Stochastic Matrix Factorization
Christopher Adams
This paper considers a restriction to non-negative matrix factorization in which at least one matrix factor is stochastic. That is, the elements of the matrix factors are non-negative and the columns of one matrix factor sum to 1. This restriction includes topic models, a popular method for analyzing unstructured data. It also includes a method for storing and finding pictures. The paper presents necessary and sufficient conditions on the observed data such that the factorization is unique. In addition, the paper characterizes natural bounds on the parameters for any observed data and presents a consistent least squares estimator. The results are illustrated using a topic model analysis of PhD abstracts in economics and the problem of storing and retrieving a set of pictures of faces.
http://arxiv.org/abs/1609.05772v1
"2016-09-19T15:19:44Z"
stat.ML, cs.LG
2016
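A small NumPy sketch of the stochastic restriction itself: any non-negative factorization $A = WH$ can be rescaled so that the columns of $W$ sum to 1 without changing the product, by absorbing a diagonal scaling into $H$. The helper name is hypothetical; this shows the constraint, not the paper's identification conditions or estimator.

```python
import numpy as np

def make_column_stochastic(W):
    """Rescale a non-negative factor so each column sums to 1.

    If A = W H and D = diag(column sums of W), then A = (W D^{-1})(D H),
    so the stochastic restriction can be imposed without changing the fit.
    """
    col_sums = W.sum(axis=0)
    return W / col_sums, col_sums

rng = np.random.default_rng(0)
W = rng.random((6, 3))
H = rng.random((3, 4))
W_s, d = make_column_stochastic(W)
H_s = d[:, None] * H                     # absorb the scaling into H
assert np.allclose(W @ H, W_s @ H_s)     # same product, stochastic W_s
assert np.allclose(W_s.sum(axis=0), 1.0)
```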
Private Topic Modeling
Mijung Park, James Foulds, Kamalika Chaudhuri, Max Welling
We develop a privatised stochastic variational inference method for Latent Dirichlet Allocation (LDA). The iterative nature of stochastic variational inference presents challenges: multiple iterations are required to obtain accurate posterior distributions, yet each iteration increases the amount of noise that must be added to achieve a reasonable degree of privacy. We propose a practical algorithm that overcomes this challenge by combining: (1) an improved composition method for differential privacy, called the moments accountant, which provides a tight bound on the privacy cost of multiple variational inference iterations and thus significantly decreases the amount of additive noise; and (2) privacy amplification resulting from subsampling of large-scale data. Focusing on conjugate exponential family models, in our private variational inference, all the posterior distributions will be privatised by simply perturbing expected sufficient statistics. Using Wikipedia data, we illustrate the effectiveness of our algorithm for large-scale data.
http://arxiv.org/abs/1609.04120v3
"2016-09-14T03:18:36Z"
stat.ML, cs.CR
2016
Detecting Singleton Review Spammers Using Semantic Similarity
Vlad Sandulescu, Martin Ester
Online reviews have increasingly become a very important resource for consumers when making purchases, though it is becoming more and more difficult for people to make well-informed buying decisions without being deceived by fake reviews. Prior works on the opinion spam problem mostly considered classifying fake reviews using behavioral user patterns. They focused on prolific users who write more than a couple of reviews, discarding one-time reviewers. The number of singleton reviewers, however, is expected to be high for many review websites. While behavioral patterns are effective when dealing with elite users, for one-time reviewers the review text needs to be exploited. In this paper we tackle the problem of detecting fake reviews written by the same person using multiple names, posting each review under a different name. We propose two methods to detect similar reviews, and show that the results generally outperform the vectorial similarity measures used in prior works. The first method extends the semantic similarity between words to the review level. The second method is based on topic modeling and exploits the similarity of the reviews' topic distributions using two models: bag-of-words and bag-of-opinion-phrases. The experiments were conducted on reviews from three different datasets: Yelp (57K reviews), Trustpilot (9K reviews) and the Ott dataset (800 reviews).
http://arxiv.org/abs/1609.02727v1
"2016-09-09T09:58:45Z"
cs.CL, cs.LG, I.7.0; J.4
2016
Ask the GRU: Multi-Task Learning for Deep Text Recommendations
Trapit Bansal, David Belanger, Andrew McCallum
In a variety of application domains the content to be recommended to users is associated with text. This includes research papers, movies with associated plot summaries, news articles, blog posts, etc. Recommendation approaches based on latent factor models can be extended naturally to leverage text by employing an explicit mapping from text to factors. This enables recommendations for new, unseen content, and may generalize better, since the factors for all items are produced by a compactly-parametrized model. Previous work has used topic models or averages of word embeddings for this mapping. In this paper we present a method leveraging deep recurrent neural networks to encode the text sequence into a latent vector, specifically gated recurrent units (GRUs) trained end-to-end on the collaborative filtering task. For the task of scientific paper recommendation, this yields models with significantly higher accuracy. In cold-start scenarios, we beat the previous state-of-the-art, all of which ignore word order. Performance is further improved by multi-task learning, where the text encoder network is trained for a combination of content recommendation and item metadata prediction. This regularizes the collaborative filtering model, ameliorating the problem of sparsity of the observed rating matrix.
http://arxiv.org/abs/1609.02116v2
"2016-09-07T19:05:42Z"
stat.ML, cs.CL, cs.LG, I.2.7; I.2.6
2016
What is Wrong with Topic Modeling? (and How to Fix it Using Search-based Software Engineering)
Amritanshu Agrawal, Wei Fu, Tim Menzies
Context: Topic modeling finds human-readable structures in unstructured textual data. A widely used topic modeler is Latent Dirichlet Allocation (LDA). When run on different datasets, LDA suffers from "order effects", i.e., different topics are generated if the order of the training data is shuffled. Such order effects introduce a systematic error for any study. This error can lead to misleading results; specifically, inaccurate topic descriptions and a reduction in the efficacy of text mining classification results. Objective: To provide a method with which distributions generated by LDA are more stable and can be used for further analysis. Method: We use LDADE, a search-based software engineering tool that tunes LDA's parameters using DE (Differential Evolution). LDADE is evaluated on data from a programmer information exchange site (Stackoverflow), title and abstract text of thousands of Software Engineering (SE) papers, and software defect reports from NASA. Results were collected across different implementations of LDA (Python+Scikit-Learn, Scala+Spark), across different platforms (Linux, Macintosh) and for different kinds of LDA (VEM, or using Gibbs sampling). Results were scored via topic stability and text mining classification accuracy. Results: In all treatments: (i) standard LDA exhibits very large topic instability; (ii) LDADE's tunings dramatically reduce cluster instability; (iii) LDADE also leads to improved performance for supervised as well as unsupervised learning. Conclusion: Due to topic instability, using standard LDA with its "off-the-shelf" settings should now be deprecated. Also, in future, we should require SE papers that use LDA to test and (if needed) mitigate LDA topic instability. Finally, LDADE is a candidate technology for effectively and efficiently reducing that instability.
http://arxiv.org/abs/1608.08176v4
"2016-08-29T18:45:00Z"
cs.SE, cs.AI, cs.CL, cs.IR
2016
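For intuition, a minimal DE/rand/1/bin loop of the kind LDADE builds on might look like the sketch below; the parameter bounds and the stand-in objective are placeholders (LDADE scores candidate LDA settings by topic stability, which is not reproduced here).

```python
import numpy as np

def differential_evolution(score, bounds, pop_size=10, F=0.7, CR=0.3,
                           generations=10, rng=None):
    """Minimal DE/rand/1/bin loop, of the kind LDADE uses to tune LDA.

    score  : maps a parameter vector (e.g. k, alpha, beta) to a value to
             maximize -- here a placeholder for a topic-stability score.
    bounds : (low, high) arrays delimiting each parameter.
    """
    rng = rng or np.random.default_rng(0)
    low, high = map(np.asarray, bounds)
    pop = rng.uniform(low, high, size=(pop_size, len(low)))
    fit = np.array([score(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), low, high)   # mutation
            cross = rng.random(len(low)) < CR              # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            f = score(trial)
            if f > fit[i]:                                 # greedy selection
                pop[i], fit[i] = trial, f
    return pop[np.argmax(fit)]

# Hypothetical objective standing in for topic-stability scoring.
best = differential_evolution(lambda p: -np.sum((p - 1.0) ** 2),
                              bounds=([0.1, 0.1, 2], [1.0, 1.0, 50]))
```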
Finding Trends in Software Research
George Mathew, Amritanshu Agrawal, Tim Menzies
This paper explores the structure of research papers in software engineering. Using text mining, we study 35,391 software engineering (SE) papers from 34 leading SE venues over the last 25 years. These venues were divided, nearly evenly, between conferences and journals. An important aspect of this analysis is that it is fully automated and repeatable. To achieve that automation, we used a stable topic modeling technique called LDADE that fully automates parameter tuning in LDA. Using LDADE, we mine 11 topics that represent much of the structure of contemporary SE. The 11 topics presented here should not be "set in stone" as the only topics worthy of study in SE. Rather, our goal is to report that (a) text mining methods can detect large-scale trends within our community; (b) those topics change with time; so (c) it is important to have automatic agents that can update our understanding of our community whenever new data arrives.
http://arxiv.org/abs/1608.08100v10
"2016-08-29T15:17:09Z"
cs.SE
2016
Towards Bayesian Deep Learning: A Framework and Some Existing Methods
Hao Wang, Dit-Yan Yeung
While perception tasks such as visual object recognition and text understanding play an important role in human intelligence, the subsequent tasks that involve inference, reasoning and planning require an even higher level of intelligence. The past few years have seen major advances in many perception tasks using deep learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. To achieve integrated intelligence that involves both perception and inference, it is naturally desirable to tightly integrate deep learning and Bayesian models within a principled probabilistic framework, which we call Bayesian deep learning. In this unified framework, the perception of text or images using deep learning can boost the performance of higher-level inference and in return, the feedback from the inference process is able to enhance the perception of text or images. This paper proposes a general framework for Bayesian deep learning and reviews its recent applications on recommender systems, topic models, and control. In this paper, we also discuss the relationship and differences between Bayesian deep learning and other related topics like Bayesian treatment of neural networks.
http://arxiv.org/abs/1608.06884v2
"2016-08-24T16:15:22Z"
stat.ML, cs.CV, cs.LG, cs.NE
2016
A Geometrical Approach to Topic Model Estimation
Zheng Tracy Ke
In probabilistic topic models, the quantity of interest---a low-rank matrix consisting of topic vectors---is hidden in the text corpus matrix, masked by noise, and the Singular Value Decomposition (SVD) is a potentially useful tool for learning such a low-rank matrix. However, the connection between this low-rank matrix and the singular vectors of the text corpus matrix is usually complicated and hard to spell out, so how to use SVD for learning topic models faces challenges. In this paper, we overcome the challenge by revealing a surprising insight: there is a low-dimensional simplex structure which can be viewed as a bridge between the low-rank matrix of interest and the SVD of the text corpus matrix, and which allows us to conveniently reconstruct the former using the latter. This insight motivates a new SVD approach to learning topic models, which we analyze with delicate random matrix theory, deriving the rate of convergence. We support our methods and theory numerically, using both simulated data and real data.
http://arxiv.org/abs/1608.04478v1
"2016-08-16T04:31:52Z"
stat.ME, cs.LG, stat.ML
2016
Viewpoint and Topic Modeling of Current Events
Kerry Zhang, Jussi Karlgren, Cheng Zhang, Jens Lagergren
There are multiple sides to every story, and while statistical topic models have been highly successful at topically summarizing the stories in corpora of text documents, they do not explicitly address the issue of learning the different sides, the viewpoints, expressed in the documents. In this paper, we show how these viewpoints can be learned completely unsupervised and represented in a human interpretable form. We use a novel approach of applying CorrLDA2 for this purpose, which learns topic-viewpoint relations that can be used to form groups of topics, where each group represents a viewpoint. A corpus of documents about the Israeli-Palestinian conflict is then used to demonstrate how a Palestinian and an Israeli viewpoint can be learned. By leveraging the magnitudes and signs of the feature weights of a linear SVM, we introduce a principled method to evaluate associations between topics and viewpoints. With this, we demonstrate, both quantitatively and qualitatively, that the learned topic groups are contextually coherent, and form consistently correct topic-viewpoint associations.
http://arxiv.org/abs/1608.04089v1
"2016-08-14T11:36:52Z"
cs.CL, cs.IR, stat.ML
2016
An Analysis of Lemmatization on Topic Models of Morphologically Rich Language
Chandler May, Ryan Cotterell, Benjamin Van Durme
Topic models are typically represented by top-$m$ word lists for human interpretation. The corpus is often pre-processed with lemmatization (or stemming) so that those representations are not undermined by a proliferation of words with similar meanings, but there is little public work on the effects of that pre-processing. Recent work studied the effect of stemming on topic models of English texts and found no supporting evidence for the practice. We study the effect of lemmatization on topic models of Russian Wikipedia articles, finding in one configuration that it significantly improves interpretability according to a word intrusion metric. We conclude that lemmatization may benefit topic models on morphologically rich languages, but that further investigation is needed.
http://arxiv.org/abs/1608.03995v2
"2016-08-13T15:54:10Z"
cs.CL
2016
Sex, drugs, and violence
Stefania Raimondo, Frank Rudzicz
Automatically detecting inappropriate content can be a difficult NLP task, requiring understanding context and innuendo, not just identifying specific keywords. Due to the large quantity of online user-generated content, automatic detection is becoming increasingly necessary. We take a largely unsupervised approach using a large corpus of narratives from a community-based self-publishing website and a small segment of crowd-sourced annotations. We explore topic modelling using latent Dirichlet allocation (and a variation), and use these to regress appropriateness ratings, effectively automating rating for suitability. The results suggest that certain topics inferred may be useful in detecting latent inappropriateness -- yielding recall up to 96% and low regression errors.
http://arxiv.org/abs/1608.03448v1
"2016-08-11T13:10:02Z"
cs.CL
2016
Topic Modelling and Event Identification from Twitter Textual Data
Marina Sokolova, Kanyi Huang, Stan Matwin, Joshua Ramisch, Vera Sazonova, Renee Black, Chris Orwa, Sidney Ochieng, Nanjira Sambuli
The tremendous growth of social media content on the Internet has inspired the development of text analytics to understand and solve real-life problems. Leveraging statistical topic modelling helps researchers and practitioners better comprehend textual content and provides useful information for further analysis. Statistical topic modelling becomes especially important when we work with large volumes of dynamic text, e.g., Facebook or Twitter datasets. In this study, we summarize the message content of four data sets of Twitter messages relating to challenging social events in Kenya. We use Latent Dirichlet Allocation (LDA) topic modelling to analyze the content. Our study uses two evaluation measures, Normalized Mutual Information (NMI) and topic coherence analysis, to select the best LDA models. The obtained LDA results show that the tool can be effectively used to extract discussion topics and summarize them for further manual analysis.
http://arxiv.org/abs/1608.02519v1
"2016-08-08T17:03:03Z"
cs.SI, cs.CL, D.4.8; H.1.2; H.2.8; I.2.7
2016
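A minimal sketch of this kind of model-selection loop, assuming gensim's LdaModel and CoherenceModel APIs and toy tokenized tweets (the study's NMI measure is omitted here):

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

# Toy tokenized tweets standing in for the real Kenyan-event datasets.
tweets = [["protest", "nairobi", "police"],
          ["election", "vote", "kenya"],
          ["flood", "rain", "relief", "kenya"]]
dictionary = Dictionary(tweets)
corpus = [dictionary.doc2bow(t) for t in tweets]

best = None
for k in (2, 3, 5):                         # candidate topic counts
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=k, passes=10, random_state=0)
    score = CoherenceModel(model=lda, texts=tweets, dictionary=dictionary,
                           coherence="c_v").get_coherence()
    if best is None or score > best[0]:     # keep the most coherent model
        best = (score, k, lda)
```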
Labeling Topics with Images using Neural Networks
Nikolaos Aletras, Arpit Mittal
Topics generated by topic models are usually represented by lists of $t$ terms or alternatively using short phrases and images. The current state-of-the-art work on labeling topics using images selects images by re-ranking a small set of candidates for a given topic. In this paper, we present a more generic method that can estimate the degree of association between any arbitrary pair of an unseen topic and image using a deep neural network. Our method has better runtime performance $O(n)$ compared to $O(n^2)$ for the current state-of-the-art method, and is also significantly more accurate.
http://arxiv.org/abs/1608.00470v2
"2016-08-01T15:27:16Z"
cs.CL, cs.CV
2016
TopicResponse: A Marriage of Topic Modelling and Rasch Modelling for Automatic Measurement in MOOCs
Jiazhen He, Benjamin I. P. Rubinstein, James Bailey, Rui Zhang, Sandra Milligan
This paper explores the suitability of using automatically discovered topics from MOOC discussion forums for modelling students' academic abilities. The Rasch model from psychometrics is a popular generative probabilistic model that relates latent student skill, latent item difficulty, and observed student-item responses within a principled, unified framework. According to scholarly educational theory, discovered topics can be regarded as appropriate measurement items if (1) students' participation across the discovered topics is well fit by the Rasch model, and if (2) the topics are interpretable to subject-matter experts as being educationally meaningful. Such Rasch-scaled topics, with associated difficulty levels, could be of potential benefit to curriculum refinement, student assessment and personalised feedback. The technical challenge that remains, is to discover meaningful topics that simultaneously achieve good statistical fit with the Rasch model. To address this challenge, we combine the Rasch model with non-negative matrix factorisation based topic modelling, jointly fitting both models. We demonstrate the suitability of our approach with quantitative experiments on data from three Coursera MOOCs, and with qualitative survey results on topic interpretability on a Discrete Optimisation MOOC.
http://arxiv.org/abs/1607.08720v2
"2016-07-29T08:17:45Z"
cs.LG, cs.CL, cs.IR, stat.ML
2016
Diagnostic Prediction Using Discomfort Drawings with IBTM
Cheng Zhang, Hedvig Kjellstrom, Carl Henrik Ek, Bo C. Bertilson
In this paper, we explore the possibility to apply machine learning to make diagnostic predictions using discomfort drawings. A discomfort drawing is an intuitive way for patients to express discomfort and pain related symptoms. These drawings have proven to be an effective method to collect patient data and make diagnostic decisions in real-life practice. A dataset from real-world patient cases is collected for which medical experts provide diagnostic labels. Next, we use a factorized multimodal topic model, Inter-Battery Topic Model (IBTM), to train a system that can make diagnostic predictions given an unseen discomfort drawing. The number of output diagnostic labels is determined by using mean-shift clustering on the discomfort drawing. Experimental results show reasonable predictions of diagnostic labels given an unseen discomfort drawing. Additionally, we generate synthetic discomfort drawings with IBTM given a diagnostic label, which results in typical cases of symptoms. The positive result indicates a significant potential of machine learning to be used for parts of the pain diagnostic process and to be a decision support system for physicians and other health care personnel.
http://arxiv.org/abs/1607.08206v2
"2016-07-27T18:20:01Z"
cs.LG
2016
Single Stage Prediction with Embedded Topic Modeling of Online Reviews for Mobile App Management
Shawn Mankad, Shengli Hu, Anandasivam Gopal
Mobile apps are one of the building blocks of the mobile digital economy. A feature differentiating mobile apps from traditional enterprise software is online reviews, which are available on app marketplaces and represent a valuable source of consumer feedback on the app. We create a supervised topic modeling approach for app developers to use mobile reviews as useful sources of quality and customer feedback, thereby complementing traditional software testing. The approach is based on a constrained matrix factorization that leverages the relationship between term frequency and a given response variable, in addition to co-occurrences between terms, to recover topics that are both predictive of consumer sentiment and useful for understanding the underlying textual themes. The factorization is combined with ordinal regression to provide guidance from online reviews on a single app's performance, as well as to systematically compare different apps over time for benchmarking of features and consumer sentiment. We apply our approach using a dataset of over 100,000 mobile reviews over several years for three of the most popular online travel agent apps from the iTunes and Google Play marketplaces.
http://arxiv.org/abs/1607.07515v3
"2016-07-26T01:23:17Z"
stat.AP, cs.IR, cs.SE
2016
An Adaptation of Topic Modeling to Sentences
Ruey-Cheng Chen, Reid Swanson, Andrew S. Gordon
Advances in topic modeling have yielded effective methods for characterizing the latent semantics of textual data. However, applying standard topic modeling approaches to sentence-level tasks introduces a number of challenges. In this paper, we adapt the approach of latent Dirichlet allocation to include an additional layer for incorporating information about the sentence boundaries in documents. We show that the addition of this minimal information about document structure improves the perplexity results of a trained model.
http://arxiv.org/abs/1607.05818v1
"2016-07-20T04:22:50Z"
cs.CL
2016
Combining Random Walks and Nonparametric Bayesian Topic Model for Community Detection
Ruimin Zhu, Wenxin Jiang
Community detection has been an active research area for decades. Among all probabilistic models, the Stochastic Block Model has been the most popular one. This paper introduces a novel probabilistic model, RW-HDP, based on random walks and the Hierarchical Dirichlet Process, for community extraction. In RW-HDP, random walks conducted in a social network are treated as documents; nodes are treated as words. By using the Hierarchical Dirichlet Process, a nonparametric Bayesian model, we are not only able to cluster nodes into different communities, but can also determine the number of communities automatically. We use Stochastic Variational Inference for our model inference, which makes our method time-efficient and easily extensible to an online learning algorithm.
http://arxiv.org/abs/1607.05573v2
"2016-07-19T13:46:15Z"
stat.AP, stat.ML
2016
Exploring the Political Agenda of the European Parliament Using a Dynamic Topic Modeling Approach
Derek Greene, James P. Cross
This study analyzes the political agenda of the European Parliament (EP) plenary, how it has evolved over time, and the manner in which Members of the European Parliament (MEPs) have reacted to external and internal stimuli when making plenary speeches. To unveil the plenary agenda and detect latent themes in legislative speeches over time, MEP speech content is analyzed using a new dynamic topic modeling method based on two layers of Non-negative Matrix Factorization (NMF). This method is applied to a new corpus of all English-language legislative speeches in the EP plenary from the period 1999-2014. Our findings suggest that two-layer NMF is a valuable alternative to existing dynamic topic modeling approaches found in the literature, and can unveil niche topics and associated vocabularies not captured by existing methods. Substantively, our findings suggest that the political agenda of the EP evolves significantly over time and reacts to exogenous events such as EU Treaty referenda and the emergence of the Euro-crisis. MEP contributions to the plenary agenda are also found to be impacted by voting behaviour and the committee structure of the Parliament.
http://arxiv.org/abs/1607.03055v1
"2016-07-11T17:48:53Z"
cs.CL, cs.CY
2,016
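Aside: a toy sketch of the two-layer NMF idea above: fit one NMF per time window, stack the window-level topic-term matrices, and factorize the stack again so that the second-layer topics link windows. Window contents and dimensions are invented.

```python
# Layer 1: per-window NMF topics. Layer 2: NMF over the stacked topics.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

windows = [["budget vote treaty", "euro crisis debt"],
           ["fisheries quota vote", "euro bailout crisis"]]
vec = TfidfVectorizer().fit([d for w in windows for d in w])  # shared vocabulary

window_topics = []
for docs in windows:
    H = NMF(n_components=2, random_state=0).fit(vec.transform(docs)).components_
    window_topics.append(H)                 # topic-term matrix for this window

stacked = np.vstack(window_topics)          # (windows * topics) x terms
dynamic = NMF(n_components=2, random_state=0).fit(stacked)
terms = np.array(vec.get_feature_names_out())
for k, row in enumerate(dynamic.components_):
    print("dynamic topic", k, terms[row.argsort()[-3:]])
```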
Translating Bayesian Networks into Entity Relationship Models, Extended Version
Frank Rosner, Alexander Hinneburg
Big data analytics applications drive the convergence of data management and machine learning, but no conceptual language is available that is spoken in both worlds. The main contribution of this paper is a method to translate Bayesian networks, a main conceptual language for probabilistic graphical models, into usable entity relationship models. The transformed representation of a Bayesian network leaves out mathematical details about probabilistic relationships but unfolds all information relevant for data management tasks. As a real world example, we present the TopicExplorer system, which uses Bayesian topic models as a core component in an interactive, database-supported web application. Lastly, we sketch a conceptual framework that eases machine learning specific development tasks while building big data analytics applications.
http://arxiv.org/abs/1607.02399v1
"2016-07-08T15:06:46Z"
cs.DB, cs.AI
2,016
Tensor Decomposition for Signal Processing and Machine Learning
Nicholas D. Sidiropoulos, Lieven De Lathauwer, Xiao Fu, Kejun Huang, Evangelos E. Papalexakis, Christos Faloutsos
Tensors or {\em multi-way arrays} are functions of three or more indices $(i,j,k,\cdots)$ -- similar to matrices (two-way arrays), which are functions of two indices $(r,c)$ for (row,column). Tensors have a rich history, stretching over almost a century, and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth {\em and depth} that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.
http://arxiv.org/abs/1607.01668v2
"2016-07-06T15:22:31Z"
stat.ML, cs.LG, cs.NA, math.NA
2,016
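Aside: a minimal CP (PARAFAC) decomposition via alternating least squares, the basic rank decomposition that the survey above builds on. Pure NumPy with no regularization or convergence checks; a teaching sketch, not production code.

```python
import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker product: (I*J) x R
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def cp_als(X, rank, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in X.shape)
    for _ in range(iters):
        # Update each factor by least squares with the other two fixed
        A = X.reshape(X.shape[0], -1) @ khatri_rao(B, C) @ \
            np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.moveaxis(X, 1, 0).reshape(X.shape[1], -1) @ khatri_rao(A, C) @ \
            np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.moveaxis(X, 2, 0).reshape(X.shape[2], -1) @ khatri_rao(A, B) @ \
            np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Recover a synthetic rank-2 tensor
A0, B0, C0 = (np.random.rand(n, 2) for n in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)))  # near zero
```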
Temporal Topic Analysis with Endogenous and Exogenous Processes
Baiyang Wang, Diego Klabjan
We consider the problem of modeling temporal textual data taking endogenous and exogenous processes into account. Such text documents arise in real world applications, including job advertisements and economic news articles, which are influenced by the fluctuations of the general economy. We propose a hierarchical Bayesian topic model which imposes a "group-correlated" hierarchical structure on the evolution of topics over time, incorporating both processes, and show that this model can be estimated using Markov chain Monte Carlo sampling methods. We further demonstrate that this model captures the intrinsic relationships between the topic distribution and the time-dependent factors, and compare its performance with latent Dirichlet allocation (LDA) and two other related models. The model is applied to two collections of documents to illustrate its empirical performance: online job advertisements from the DirectEmployers Association and journalists' postings on BusinessInsider.com.
http://arxiv.org/abs/1607.01274v1
"2016-07-04T01:16:55Z"
cs.CL, cs.IR, cs.LG
2,016
SentiBubbles: Topic Modeling and Sentiment Visualization of Entity-centric Tweets
João Oliveira, Mike Pinto, Pedro Saleiro, Jorge Teixeira
Social media users tend to mention entities when reacting to news events. The main purpose of this work is to create entity-centric aggregations of tweets on a daily basis. By applying topic modeling and sentiment analysis, we create data visualization insights about current events and people's reactions to those events from an entity-centric perspective.
http://arxiv.org/abs/1607.00167v2
"2016-07-01T09:15:13Z"
cs.SI, cs.CL, cs.IR
2,016
Dynamic Hierarchical Dirichlet Process for Abnormal Behaviour Detection in Video
Olga Isupova, Danil Kuzin, Lyudmila Mihaylova
This paper proposes a novel dynamic Hierarchical Dirichlet Process topic model that considers the dependence between successive observations. Conventional posterior inference algorithms for this kind of model require processing the whole dataset in several passes, which is computationally intractable for massive or sequential data. We design batch and online inference algorithms, based on Gibbs sampling, for the proposed model. This allows sequential data to be processed, with the model incrementally updated by each new observation. The model is applied to abnormal behaviour detection in video sequences. A new abnormality measure is proposed for decision making. The proposed method is compared with a method based on the non-dynamic Hierarchical Dirichlet Process, for which we also derive the online Gibbs sampler and the abnormality measure. Results on synthetic and real data show that accounting for dynamics in a topic model improves classification performance for abnormal behaviour detection.
http://arxiv.org/abs/1606.08476v1
"2016-06-27T20:29:42Z"
stat.ML
2,016
Anomaly detection in video with Bayesian nonparametrics
Olga Isupova, Danil Kuzin, Lyudmila Mihaylova
A novel dynamic Bayesian nonparametric topic model for anomaly detection in video is proposed in this paper. Batch and online Gibbs samplers are developed for inference. The paper introduces a new abnormality measure for decision making. The proposed method is evaluated on both synthetic and real data. The comparison with a non-dynamic model shows the superiority of the proposed dynamic one in terms of the classification performance for anomaly detection.
http://arxiv.org/abs/1606.08455v1
"2016-06-27T20:01:08Z"
stat.ML
2,016
Summarizing Decisions in Spoken Meetings
Lu Wang, Claire Cardie
This paper addresses the problem of summarizing decisions in spoken meetings: our goal is to produce a concise {\it decision abstract} for each meeting decision. We explore and compare token-level and dialogue act-level automatic summarization methods using both unsupervised and supervised learning frameworks. In the supervised summarization setting, and given true clusterings of decision-related utterances, we find that token-level summaries that employ discourse context can approach an upper bound for decision abstracts derived directly from dialogue acts. In the unsupervised summarization setting, we find that summaries based on unsupervised partitioning of decision-related utterances perform comparably to those based on partitions generated using supervised techniques (0.22 ROUGE-F1 using LDA-based topic models vs. 0.23 using SVMs).
http://arxiv.org/abs/1606.07965v1
"2016-06-25T20:45:14Z"
cs.CL
2,016
Unsupervised Topic Modeling Approaches to Decision Summarization in Spoken Meetings
Lu Wang, Claire Cardie
We present a token-level decision summarization framework that utilizes the latent topic structures of utterances to identify "summary-worthy" words. Concretely, a series of unsupervised topic models is explored and experimental results show that fine-grained topic models, which discover topics at the utterance-level rather than the document-level, can better identify the gist of the decision-making process. Moreover, our proposed token-level summarization approach, which is able to remove redundancies within utterances, outperforms existing utterance ranking based summarization methods. Finally, context information is also investigated to add additional relevant information to the summary.
http://arxiv.org/abs/1606.07829v1
"2016-06-24T20:17:44Z"
cs.CL
2,016
Visualizing textual models with in-text and word-as-pixel highlighting
Abram Handler, Su Lin Blodgett, Brendan O'Connor
We explore two techniques which use color to make sense of statistical text models. One method uses in-text annotations to illustrate a model's view of particular tokens in particular documents. Another uses a high-level, "words-as-pixels" graphic to display an entire corpus. Together, these methods offer both zoomed-in and zoomed-out perspectives into a model's understanding of text. We show how these interconnected methods help diagnose a classifier's poor performance on Twitter slang, and make sense of a topic model on historical political texts.
http://arxiv.org/abs/1606.06352v1
"2016-06-20T22:30:19Z"
stat.ML, cs.CL, cs.LG
2,016
SMS Spam Filtering using Probabilistic Topic Modelling and Stacked Denoising Autoencoder
Noura Al Moubayed, Toby Breckon, Peter Matthews, A. Stephen McGough
In this paper we present a novel approach to spam filtering and demonstrate its applicability to SMS messages. Our approach requires minimal feature engineering and a small set of labelled data samples. Features are extracted using topic modelling based on latent Dirichlet allocation, and a comprehensive data model is then created using a Stacked Denoising Autoencoder (SDA). Topic modelling summarises the data, providing ease of use and high interpretability by visualising the topics using word clouds. Given that SMS messages can be regarded as either spam (unwanted) or ham (wanted), the SDA is able to model the messages and accurately discriminate between the two classes without the need for a pre-labelled training set. The results are compared against state-of-the-art spam detection algorithms, with our proposed approach achieving over 97% accuracy, which compares favourably to the best reported algorithms in the literature.
http://arxiv.org/abs/1606.05554v1
"2016-06-17T15:15:18Z"
cs.CL, cs.LG, cs.NE
2,016
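Aside: a compressed sketch of the pipeline above: LDA topic proportions as message features, then a small denoising autoencoder learns a compact representation. A single-layer autoencoder stands in for the stacked (SDA) variant, and the SMS examples are toys.

```python
# LDA features per message, then a tiny denoising autoencoder in PyTorch.
import torch, torch.nn as nn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

sms = ["win free prize call now", "meet you at lunch",
       "free entry claim prize", "running late see you soon"]
counts = CountVectorizer().fit_transform(sms)
theta = LatentDirichletAllocation(n_components=3, random_state=0)\
            .fit_transform(counts)           # per-message topic features

x = torch.tensor(theta, dtype=torch.float32)
enc, dec = nn.Linear(3, 2), nn.Linear(2, 3)  # one layer stands in for a stack
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=0.01)
for _ in range(200):
    noisy = x + 0.1 * torch.randn_like(x)    # denoising: corrupt the input
    loss = nn.functional.mse_loss(dec(torch.relu(enc(noisy))), x)
    opt.zero_grad(); loss.backward(); opt.step()
print(enc(x))                                # compact codes for each message
```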
Retrieving and Ranking Similar Questions from Question-Answer Archives Using Topic Modelling and Topic Distribution Regression
Pedro Chahuara, Thomas Lampert, Pierre Gancarski
Presented herein is a novel model for similar question ranking within collaborative question answer platforms. The presented approach integrates a regression stage to relate topics derived from questions to those derived from question-answer pairs. This helps to avoid problems caused by the differences in vocabulary used within questions and answers, and the tendency for questions to be shorter than answers. The performance of the model is shown to outperform translation methods and topic modelling (without regression) on several real-world datasets.
http://arxiv.org/abs/1606.03783v1
"2016-06-12T23:50:19Z"
cs.IR, cs.CL, cs.LG
2,016
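Aside: a sketch of the regression stage described above: one LDA model over questions, another over question-answer pairs, and a linear regression that maps question topics into the question-answer topic space, where ranking is done by cosine similarity. Corpus and dimensions are illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LinearRegression
from sklearn.metrics.pairwise import cosine_similarity

questions = ["how to reset password", "why is wifi slow", "reset account password"]
qa_pairs = [q + " answer text here" for q in questions]  # toy Q+A archive

vec_q = CountVectorizer().fit(questions)
lda_q = LatentDirichletAllocation(n_components=2, random_state=0)\
            .fit(vec_q.transform(questions))
Tq = lda_q.transform(vec_q.transform(questions))         # question topics

vec_a = CountVectorizer().fit(qa_pairs)
lda_a = LatentDirichletAllocation(n_components=2, random_state=0)\
            .fit(vec_a.transform(qa_pairs))
Tqa = lda_a.transform(vec_a.transform(qa_pairs))          # Q+A topics

reg = LinearRegression().fit(Tq, Tqa)        # map question topics -> Q+A topics
new_q = reg.predict(lda_q.transform(vec_q.transform(["forgot my password"])))
print(cosine_similarity(new_q, Tqa)[0].argsort()[::-1])   # ranked archive items
```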
Sub-Story Detection in Twitter with Hierarchical Dirichlet Processes
P. K. Srijith, Mark Hepple, Kalina Bontcheva, Daniel Preotiuc-Pietro
Social media has now become the de facto information source on real world events. The challenge, however, due to the high volume and velocity nature of social media streams, is in how to follow all posts pertaining to a given event over time, a task referred to as story detection. Moreover, there are often several different stories pertaining to a given event, which we refer to as sub-stories and the corresponding task of their automatic detection as sub-story detection. This paper proposes hierarchical Dirichlet processes (HDP), a probabilistic topic model, as an effective method for automatic sub-story detection. HDP can learn sub-topics associated with sub-stories which enables it to handle subtle variations in sub-stories. It is compared with state- of-the-art story detection approaches based on locality sensitive hashing and spectral clustering. We demonstrate the superior performance of HDP for sub-story detection on real world Twitter data sets using various evaluation measures. The ability of HDP to learn sub-topics helps it to recall the sub- stories with high precision. Another contribution of this paper is in demonstrating that the conversational structures within the Twitter stream can be used to improve sub-story detection performance significantly.
http://arxiv.org/abs/1606.03561v1
"2016-06-11T06:52:18Z"
cs.IR, cs.SI
2,016
Generative Topic Embedding: a Continuous Representation of Documents (Extended Version with Proofs)
Shaohua Li, Tat-Seng Chua, Jun Zhu, Chunyan Miao
Word embedding maps words into a low-dimensional continuous embedding space by exploiting the local word collocation patterns in a small context window. On the other hand, topic modeling maps documents onto a low-dimensional topic space, by utilizing the global word collocation patterns in the same document. These two types of patterns are complementary. In this paper, we propose a generative topic embedding model to combine the two types of patterns. In our model, topics are represented by embedding vectors, and are shared across documents. The probability of each word is influenced by both its local context and its topic. A variational inference method yields the topic embeddings as well as the topic mixing proportions for each document. Jointly they represent the document in a low-dimensional continuous space. In two document classification tasks, our method performs better than eight existing methods, with fewer features. In addition, we illustrate with an example that our method can generate coherent topics even based on only one document.
http://arxiv.org/abs/1606.02979v2
"2016-06-09T14:45:39Z"
cs.CL, cs.AI, cs.IR, cs.LG, stat.ML
2,016
Source-LDA: Enhancing probabilistic topic models using prior knowledge sources
Justin Wood, Patrick Tan, Wei Wang, Corey Arnold
A popular approach to topic modeling involves extracting co-occurring n-grams of a corpus into semantic themes. The set of n-grams in a theme represents an underlying topic, but most topic modeling approaches are not able to label these sets of words with a single n-gram. Such labels are useful for topic identification in summarization systems. This paper introduces a novel approach to labeling a group of n-grams comprising an individual topic. The approach taken is to complement the existing topic distributions over words with a known distribution based on a predefined set of topics. This is done by integrating existing labeled knowledge sources representing known potential topics into the probabilistic topic model. These knowledge sources are translated into a distribution and used to set the hyperparameters of the Dirichlet-generated distribution over words. During inference, these modified distributions guide the convergence of the latent topics to conform with the complementary distributions. This approach ensures that the topic inference process is consistent with existing knowledge. The label assignments from the complementary knowledge sources are then transferred to the latent topics of the corpus. The results show both accurate label assignment to topics and improved topic generation compared with various labeling approaches based on Latent Dirichlet allocation (LDA).
http://arxiv.org/abs/1606.00577v3
"2016-06-02T08:15:15Z"
cs.CL, cs.IR, cs.LG
2,016
Temporal Topic Modeling to Assess Associations between News Trends and Infectious Disease Outbreaks
Saurav Ghosh, Prithwish Chakraborty, Elaine O. Nsoesie, Emily Cohn, Sumiko R. Mekaru, John S. Brownstein, Naren Ramakrishnan
In retrospective assessments, internet news reports have been shown to capture early reports of unknown infectious disease transmission prior to official laboratory confirmation. In general, media interest and reporting peaks and wanes during the course of an outbreak. In this study, we quantify the extent to which media interest during infectious disease outbreaks is indicative of trends of reported incidence. We introduce an approach that uses supervised temporal topic models to transform large corpora of news articles into temporal topic trends. The key advantages of this approach include applicability to a wide range of diseases and the ability to capture disease dynamics, including seasonality and abrupt peaks and troughs. We evaluated the method using data from multiple infectious disease outbreaks reported in the United States of America (U.S.), China and India. We noted that temporal topic trends extracted from disease-related news reports successfully captured the dynamics of multiple outbreaks, such as whooping cough in the U.S. (2012) and dengue outbreaks in India (2013) and China (2014). Our observations also suggest that efficient modeling of temporal topic trends using time-series regression techniques can estimate disease case counts with increased precision before official reports by health organizations.
http://arxiv.org/abs/1606.00411v1
"2016-06-01T19:30:07Z"
cs.SI, cs.CL, cs.IR, stat.ML
2,016
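Aside: the final step above, estimating case counts from temporal topic trends, amounts to a time-series regression. A minimal sketch with invented weekly topic intensities and incidence counts:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

topic_trends = np.array([[0.1, 0.0], [0.3, 0.1], [0.7, 0.2],
                         [0.9, 0.4], [0.6, 0.3]])   # weeks x topics (toy data)
cases = np.array([2, 5, 14, 20, 11])                 # reported incidence

model = LinearRegression().fit(topic_trends[:-1], cases[1:])  # one-week lead
print(model.predict(topic_trends[-1:]))              # nowcast next week's count
```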
On a Topic Model for Sentences
Georgios Balikas, Massih-Reza Amini, Marianne Clausel
Probabilistic topic models are generative models that describe the content of documents by discovering the latent topics underlying them. However, the structure of the textual input, and for instance the grouping of words in coherent text spans such as sentences, contains much information which is generally lost with these models. In this paper, we propose sentenceLDA, an extension of LDA whose goal is to overcome this limitation by incorporating the structure of the text in the generative and inference processes. We illustrate the advantages of sentenceLDA by comparing it with LDA using both intrinsic (perplexity) and extrinsic (text classification) evaluation tasks on different text collections.
http://arxiv.org/abs/1606.00253v1
"2016-06-01T12:34:50Z"
cs.CL, cs.IR, cs.LG
2,016
Spectral Methods for Correlated Topic Models
Forough Arabshahi, Animashree Anandkumar
In this paper, we propose guaranteed spectral methods for learning a broad range of topic models, which generalize the popular Latent Dirichlet Allocation (LDA). We overcome the limitation of LDA to incorporate arbitrary topic correlations, by assuming that the hidden topic proportions are drawn from a flexible class of Normalized Infinitely Divisible (NID) distributions. NID distributions are generated through the process of normalizing a family of independent Infinitely Divisible (ID) random variables. The Dirichlet distribution is a special case obtained by normalizing a set of Gamma random variables. We prove that this flexible topic model class can be learned via spectral methods using only moments up to the third order, with (low order) polynomial sample and computational complexity. The proof is based on a key new technique derived here that allows us to diagonalize the moments of the NID distribution through an efficient procedure that requires evaluating only univariate integrals, despite the fact that we are handling high dimensional multivariate moments. In order to assess the performance of our proposed Latent NID topic model, we use two real datasets of articles collected from New York Times and Pubmed. Our experiments yield improved perplexity on both datasets compared with the baseline.
http://arxiv.org/abs/1605.09080v5
"2016-05-30T00:32:11Z"
cs.LG, stat.ML
2,016
Online Bayesian Collaborative Topic Regression
Chenghao Liu, Tao Jin, Steven C. H. Hoi, Peilin Zhao, Jianling Sun
Collaborative Topic Regression (CTR) combines ideas of probabilistic matrix factorization (PMF) and topic modeling (e.g., LDA) for recommender systems, and has gained increasing success in many applications. Despite enjoying many advantages, the existing CTR algorithms have some critical limitations. First of all, they are often designed to work in a batch learning manner, making them unsuitable for streaming data or big data in real-world recommender systems. Second, the document-specific topic proportions of LDA are fed to the downstream PMF but not the reverse, which is sub-optimal as the rating information is not exploited in discovering the low-dimensional representation of documents and can thus result in a sub-optimal representation for prediction. In this paper, we propose a novel scheme of Online Bayesian Collaborative Topic Regression (OBCTR) which is efficient and scalable for learning from data streams. In particular, we {\it jointly} optimize the combined objective function of both PMF and LDA in an online learning fashion, in which the PMF and LDA tasks can reinforce each other during the online learning process. Our encouraging experimental results on real-world data validate the effectiveness of the proposed method.
http://arxiv.org/abs/1605.08872v1
"2016-05-28T10:17:37Z"
cs.LG, cs.IR
2,016
Provable Algorithms for Inference in Topic Models
Sanjeev Arora, Rong Ge, Frederic Koehler, Tengyu Ma, Ankur Moitra
Recently, there has been considerable progress on designing algorithms with provable guarantees -- typically using linear algebraic methods -- for parameter learning in latent variable models. But designing provable algorithms for inference has proven to be more challenging. Here we take a first step towards provable inference in topic models. We leverage a property of topic models that enables us to construct simple linear estimators for the unknown topic proportions that have small variance, and consequently can work with short documents. Our estimators also correspond to finding an estimate around which the posterior is well-concentrated. We show lower bounds that for shorter documents it can be information theoretically impossible to find the hidden topics. Finally, we give empirical results that demonstrate that our algorithm works on realistic topic models. It yields good solutions on synthetic data and runs in time comparable to a {\em single} iteration of Gibbs sampling.
http://arxiv.org/abs/1605.08491v1
"2016-05-27T02:18:43Z"
cs.LG, stat.ML
2,016
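Aside: the linear estimators above exploit the fact that a document's empirical word frequencies x are approximately A w, with A the known topic-word matrix and w the unknown topic proportions. As a stand-in for the paper's estimator, the sketch below uses nonnegative least squares followed by normalization onto the simplex, on toy values.

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[0.5, 0.0],      # word probabilities for 2 topics, 4 words
              [0.3, 0.1],
              [0.1, 0.4],
              [0.1, 0.5]])
w_true = np.array([0.7, 0.3])
x = A @ w_true                 # idealized word frequencies of one document

w_hat, _ = nnls(A, x)          # nonnegative least squares estimate
w_hat /= w_hat.sum()           # project back onto the topic simplex
print(w_hat)                   # close to [0.7, 0.3]
```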
Toward a general, scaleable framework for Bayesian teaching with applications to topic models
Baxter S. Eaves Jr, Patrick Shafto
Machines, not humans, are the world's dominant knowledge accumulators, but humans remain the dominant decision makers. Interpreting and disseminating the knowledge accumulated by machines requires expertise and time, and is prone to failure. The problem of how best to convey accumulated knowledge from computers to humans is a critical bottleneck in the broader application of machine learning. We propose an approach based on human teaching where the problem is formalized as selecting a small subset of the data that will, with high probability, lead the human user to the correct inference. This approach, though successful for modeling human learning in simple laboratory experiments, has failed to achieve broader relevance due to challenges in formulating general and scalable algorithms. We propose general-purpose teaching via pseudo-marginal sampling and demonstrate the algorithm by teaching topic models. Simulation results show that our sampling-based approach effectively approximates the target probability in cases where ground truth can be computed via enumeration, yields data that are markedly different from those expected under random sampling, and speeds learning, especially for small amounts of data. Application to movie synopsis data illustrates differences between teaching and random sampling for teaching distributions and specific topics, and demonstrates gains in scalability and applicability to real-world problems.
http://arxiv.org/abs/1605.07999v1
"2016-05-25T18:33:10Z"
cs.LG, cs.AI, stat.ML
2,016
Computing Web-scale Topic Models using an Asynchronous Parameter Server
Rolf Jagerman, Carsten Eickhoff, Maarten de Rijke
Topic models such as Latent Dirichlet Allocation (LDA) have been widely used in information retrieval for tasks ranging from smoothing and feedback methods to tools for exploratory search and discovery. However, classical methods for inferring topic models do not scale up to the massive size of today's publicly available Web-scale data sets. The state-of-the-art approaches rely on custom strategies, implementations and hardware to facilitate their asynchronous, communication-intensive workloads. We present APS-LDA, which integrates state-of-the-art topic modeling with cluster computing frameworks such as Spark using a novel asynchronous parameter server. Advantages of this integration include convenient usage of existing data processing pipelines and eliminating the need for disk writes as data can be kept in memory from start to finish. Our goal is not to outperform highly customized implementations, but to propose a general high-performance topic modeling framework that can easily be used in today's data processing pipelines. We compare APS-LDA to the existing Spark LDA implementations and show that our system can, on a 480-core cluster, process up to 135 times more data and 10 times more topics without sacrificing model quality.
http://arxiv.org/abs/1605.07422v3
"2016-05-24T12:40:29Z"
cs.DC, cs.IR, cs.LG, stat.ML
2,016
Latent Tree Models for Hierarchical Topic Detection
Peixian Chen, Nevin L. Zhang, Tengfei Liu, Leonard K. M. Poon, Zhourong Chen, Farhan Khawar
We present a novel method for hierarchical topic detection where topics are obtained by clustering documents in multiple ways. Specifically, we model document collections using a class of graphical models called hierarchical latent tree models (HLTMs). The variables at the bottom level of an HLTM are observed binary variables that represent the presence/absence of words in a document. The variables at other levels are binary latent variables, with those at the lowest latent level representing word co-occurrence patterns and those at higher levels representing co-occurrence of patterns at the level below. Each latent variable gives a soft partition of the documents, and document clusters in the partitions are interpreted as topics. Latent variables at high levels of the hierarchy capture long-range word co-occurrence patterns and hence give thematically more general topics, while those at low levels of the hierarchy capture short-range word co-occurrence patterns and give thematically more specific topics. Unlike LDA-based topic models, HLTMs do not refer to a document generation process and use word variables instead of token variables. They use a tree structure to model the relationships between topics and words, which is conducive to the discovery of meaningful topics and topic hierarchies.
http://arxiv.org/abs/1605.06650v2
"2016-05-21T14:36:33Z"
cs.CL, cs.IR, cs.LG, stat.ML
2,016
Inter-Battery Topic Representation Learning
Cheng Zhang, Hedvig Kjellstrom, Carl Henrik Ek
In this paper, we present the Inter-Battery Topic Model (IBTM). Our approach extends traditional topic models by learning a factorized latent variable representation. The structured representation leads to a model that marries benefits traditionally associated with a discriminative approach, such as feature selection, with those of a generative model, such as principled regularization and the ability to handle missing data. The factorization is provided by representing data in terms of aligned pairs of observations as different views. This provides a means for selecting a representation that separately models topics that exist in both views from the topics that are unique to a single view. This structured consolidation allows for efficient and robust inference and provides a compact and efficient representation. Learning is performed in a Bayesian fashion by maximizing a rigorous bound on the log-likelihood. We first illustrate the benefits of the model on a synthetic dataset. The model is then evaluated in both uni- and multi-modality settings on two different classification tasks with off-the-shelf convolutional neural network (CNN) features, generating state-of-the-art results with extremely compact representations.
http://arxiv.org/abs/1605.06155v2
"2016-05-19T21:44:12Z"
cs.LG, cs.CV
2,016
Self-Reflective Risk-Aware Artificial Cognitive Modeling for Robot Response to Human Behaviors
Fei Han, Christopher Reardon, Lynne E. Parker, Hao Zhang
In order for cooperative robots ("co-robots") to respond to human behaviors accurately and efficiently in human-robot collaboration, interpretation of human actions, awareness of new situations, and appropriate decision making are all crucial abilities for co-robots. For this purpose, the human behaviors should be interpreted by co-robots in the same manner as human peers. To address this issue, a novel interpretability indicator is introduced so that robot actions are appropriate to the current human behaviors. In addition, the complete consideration of all potential situations of a robot's environment is nearly impossible in real-world applications, making it difficult for the co-robot to act appropriately and safely in new scenarios. This is true even when the pretrained model is highly accurate in a known situation. For effective and safe teaming with humans, we introduce a new generalizability indicator that allows a co-robot to self-reflect and reason about when an observation falls outside the co-robot's learned model. Based on topic modeling and two novel indicators, we propose a new Self-reflective Risk-aware Artificial Cognitive (SRAC) model. The co-robots are able to consider action risks and identify new situations so that better decisions can be made. Experiments both using real-world datasets and on physical robots suggest that our SRAC model significantly outperforms the traditional methodology and enables better decision making in response to human activities.
http://arxiv.org/abs/1605.04934v1
"2016-05-16T20:22:30Z"
cs.RO, cs.AI
2,016
Mixing Dirichlet Topic Models and Word Embeddings to Make lda2vec
Christopher E Moody
Distributed dense word vectors have been shown to be effective at capturing token-level semantic and syntactic regularities in language, while topic models can form interpretable representations over documents. In this work, we describe lda2vec, a model that learns dense word vectors jointly with Dirichlet-distributed latent document-level mixtures of topic vectors. In contrast to continuous dense document representations, this formulation produces sparse, interpretable document mixtures through a non-negative simplex constraint. Our method is simple to incorporate into existing automatic differentiation frameworks and allows for unsupervised document representations geared for use by scientists while simultaneously learning word vectors and the linear relationships between them.
http://arxiv.org/abs/1605.02019v1
"2016-05-06T18:13:18Z"
cs.CL
2,016
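Aside: a toy rendering of the lda2vec construction above: a document vector is a mixture of topic vectors (a softmax gives the non-negative simplex constraint) added to a pivot word vector to predict a context word. Dimensions and the single training step are illustrative; this is not Moody's implementation.

```python
import torch, torch.nn.functional as F

V, D, T, E = 50, 8, 3, 16        # vocab, docs, topics, embedding dim (toy sizes)
words = torch.nn.Embedding(V, E)
topics = torch.nn.Parameter(torch.randn(T, E))
doc_weights = torch.nn.Parameter(torch.randn(D, T))  # unnormalized doc mixtures

def doc_vector(doc_ids):
    mix = F.softmax(doc_weights[doc_ids], dim=-1)    # non-negative simplex mixture
    return mix @ topics

pivot, context, doc = torch.tensor([3]), torch.tensor([7]), torch.tensor([0])
ctx_pred = words(pivot) + doc_vector(doc)            # pivot word + document vector
logits = ctx_pred @ words.weight.T                   # score every vocabulary word
loss = F.cross_entropy(logits, context)
loss.backward()                                      # trains words, topics, mixtures
print(float(loss))
```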
Tracking Dengue Epidemics using Twitter Content Classification and Topic Modelling
Paolo Missier, Alexander Romanovsky, Tudor Miu, Atinder Pal, Michael Daniilakis, Alessandro Garcia, Diego Cedrim, Leonardo da Silva Sousa
Detecting and preventing outbreaks of mosquito-borne diseases such as Dengue and Zika in Brazil and other tropical regions has long been a priority for governments in affected areas. Streaming social media content, such as Twitter, is increasingly being used for health vigilance applications such as flu detection. However, previous work has not addressed the complexity of drastic seasonal changes in Twitter content across multiple epidemic outbreaks. To address this gap, this paper contrasts two complementary approaches to detecting Twitter content that is relevant for Dengue outbreak detection, namely supervised classification and unsupervised clustering using topic modelling. Each approach has benefits and shortcomings. Our classifier achieves a prediction accuracy of about 80\% based on a small training set of about 1,000 instances, but the need for manual annotation makes it hard to track seasonal changes in the nature of the epidemics, such as the emergence of new types of virus in certain geographical locations. In contrast, LDA-based topic modelling scales well, generating cohesive and well-separated clusters from larger samples. While clusters can easily be re-generated following changes in epidemics, this approach makes it hard to clearly segregate relevant tweets into well-defined clusters.
http://arxiv.org/abs/1605.00968v1
"2016-05-03T16:22:30Z"
cs.SI
2,016
Dominant Codewords Selection with Topic Model for Action Recognition
Hirokatsu Kataoka, Masaki Hayashi, Kenji Iwata, Yutaka Satoh, Yoshimitsu Aoki, Slobodan Ilic
In this paper, we propose a framework for recognizing human activities that uses only in-topic dominant codewords and a mixture of intertopic vectors. Latent Dirichlet allocation (LDA) is used to develop approximations of human motion primitives; these are mid-level representations, and they adaptively integrate dominant vectors when classifying human activities. In LDA topic modeling, action videos (documents) are represented by a bag-of-words (input from a dictionary), and these are based on improved dense trajectories. The output topics correspond to human motion primitives, such as finger moving or subtle leg motion. We eliminate the impurities, such as missed tracking or changing light conditions, in each motion primitive. The assembled vector of motion primitives is an improved representation of the action. We demonstrate our method on four different datasets.
http://arxiv.org/abs/1605.00324v1
"2016-05-01T23:58:06Z"
cs.CV
2,016
Detecting "Smart" Spammers On Social Network: A Topic Model Approach
Linqing Liu, Yao Lu, Ye Luo, Renxian Zhang, Laurent Itti, Jianwei Lu
Spammer detection on social networks is a challenging problem. Rigid anti-spam rules have resulted in the emergence of "smart" spammers, who resemble legitimate users and are difficult to identify. In this paper, we present a novel spammer classification approach based on Latent Dirichlet Allocation (LDA), a topic model. Our approach extracts both the local and the global information of topic distribution patterns, which captures the essence of spamming. Tested on one benchmark dataset and one self-collected dataset, our proposed method outperforms other state-of-the-art methods in terms of averaged F1-score.
http://arxiv.org/abs/1604.08504v2
"2016-04-28T16:36:35Z"
cs.CL, cs.SI
2,016
Entities as topic labels: Improving topic interpretability and evaluability combining Entity Linking and Labeled LDA
Federico Nanni, Pablo Ruiz Fabo
In order to create a corpus exploration method providing topics that are easier to interpret than standard LDA topic models, here we propose combining two techniques called Entity linking and Labeled LDA. Our method identifies in an ontology a series of descriptive labels for each document in a corpus. Then it generates a specific topic for each label. Having a direct relation between topics and labels makes interpretation easier; using an ontology as background knowledge limits label ambiguity. As our topics are described with a limited number of clear-cut labels, they promote interpretability, and this may help quantitative evaluation. We illustrate the potential of the approach by applying it in order to define the most relevant topics addressed by each party in the European Parliament's fifth mandate (1999-2004).
http://arxiv.org/abs/1604.07809v1
"2016-04-26T19:31:08Z"
cs.CL
2,016
Analyzing User Preference for Social Image Recommendation
Xianming Liu, Min-Hsuan Tsai, Thomas Huang
With the incredibly growing amount of multimedia data shared on social media platforms, recommender systems have become an important necessity to ease users' burden of information overload. In such a scenario, an extensive amount of heterogeneous information, such as tags and image content, in addition to the user-to-item preferences, is extremely valuable for making effective recommendations. In this paper, we explore a novel hybrid algorithm termed {\em STM} for image recommendation. STM jointly considers the problem of image content analysis with the users' preferences on the basis of sparse representation. STM is able to tackle the challenges of highly sparse user feedback and cold-start problems in the social network scenario. In addition, our model is based on classical probabilistic matrix factorization and can be easily extended to incorporate other useful information such as social relationships. We evaluate our approach on a newly collected 0.3 million social image data set from Flickr. The experimental results demonstrate that sparse topic modeling of the image content leads to more effective recommendations, with a significant performance gain over state-of-the-art alternatives.
http://arxiv.org/abs/1604.07044v1
"2016-04-24T15:54:02Z"
cs.IR
2,016
Knowledge model: a method to evaluate an individual's knowledge quantitatively
Gangli Liu
As the quantity of human knowledge increases rapidly, it is harder and harder to evaluate a knowledge worker's knowledge quantitatively. There are many demands for such evaluation, for example: accurately finding out a researcher's research concentrations over the last three years; searching for common topics between two scientists with different academic backgrounds; or helping a researcher discover his deficiencies in a research field. This paper proposes a method named the knowledge model to evaluate a knowledge worker's knowledge quantitatively without administering an examination. It records and analyzes each of an individual's learning experiences, discovering all the knowledge points involved and calculating their shares by analyzing the textual learning contents with a topic model. It calculates a score for each knowledge point by accumulating the effects of all of one's learning experiences about it. A preliminary knowledge evaluation system is developed to test the practicability of the knowledge model.
http://arxiv.org/abs/1604.06252v3
"2016-04-21T10:50:50Z"
cs.HC
2,016
What we write about when we write about causality: Features of causal statements across large-scale social discourse
Thomas C. McAndrew, Joshua C. Bongard, Christopher M. Danforth, Peter S. Dodds, Paul D. H. Hines, James P. Bagrow
Identifying and communicating relationships between causes and effects is important for understanding our world, but is affected by language structure, cognitive and emotional biases, and the properties of the communication medium. Despite the increasing importance of social media, much remains unknown about causal statements made online. To study real-world causal attribution, we extract a large-scale corpus of causal statements made on the Twitter social network platform as well as a comparable random control corpus. We compare causal and control statements using statistical language and sentiment analysis tools. We find that causal statements have a number of significant lexical and grammatical differences compared with controls and tend to be more negative in sentiment than controls. Causal statements made online tend to focus on news and current events, medicine and health, or interpersonal relationships, as shown by topic models. By quantifying the features and potential biases of causality communication, this study improves our understanding of the accuracy of information and opinions found online.
http://arxiv.org/abs/1604.05781v2
"2016-04-20T01:06:50Z"
cs.CY, cs.CL, cs.SI
2,016
Stirling Stuff
John S. Reid
Robert Stirling's patent for what was essentially a new type of engine to create work from heat was submitted in 1816. Its reception was underwhelming and although the idea was sporadically developed, it was eclipsed by the steam engine and, later, the internal combustion engine. Today, though, the environmentally favourable credentials of the Stirling engine principles are driving a resurgence of interest, with modern designs using modern materials. These themes are woven through a historically based narrative that introduces Robert Stirling and his background, a description of his patent and the principles behind his engine, and discusses the now popular model Stirling engines readily available. These topical models, or alternatives made 'in house', form a good platform for investigating some of the thermodynamics governing the performance of engines in general.
http://arxiv.org/abs/1604.02362v2
"2016-04-08T14:48:27Z"
physics.hist-ph
2,016
Sentence Level Recurrent Topic Model: Letting Topics Speak for Themselves
Fei Tian, Bin Gao, Di He, Tie-Yan Liu
We propose Sentence Level Recurrent Topic Model (SLRTM), a new topic model that assumes the generation of each word within a sentence depends on both the topic of the sentence and the whole history of its preceding words in the sentence. Unlike conventional topic models, which largely ignore the sequential order of words and their topic coherence, SLRTM fully characterizes both by using a Recurrent Neural Network (RNN) based framework. Experimental results show that SLRTM outperforms several strong baselines on various tasks. Furthermore, SLRTM can automatically generate sentences given a topic (i.e., topics to sentences), which is a key technology for real-world applications such as personalized short text conversation.
http://arxiv.org/abs/1604.02038v2
"2016-04-07T15:29:45Z"
cs.LG, cs.CL, cs.IR
2,016
Combinatorial Topic Models using Small-Variance Asymptotics
Ke Jiang, Suvrit Sra, Brian Kulis
Topic models have emerged as fundamental tools in unsupervised machine learning. Most modern topic modeling algorithms take a probabilistic view and derive inference algorithms based on Latent Dirichlet Allocation (LDA) or its variants. In contrast, we study topic modeling as a combinatorial optimization problem, and propose a new objective function derived from LDA by passing to the small-variance limit. We minimize the derived objective by using ideas from combinatorial optimization, which results in a new, fast, and high-quality topic modeling algorithm. In particular, we show that our results are competitive with popular LDA-based topic modeling approaches, and also discuss the (dis)similarities between our approach and its probabilistic counterparts.
http://arxiv.org/abs/1604.02027v2
"2016-04-07T15:04:16Z"
cs.LG, cs.CL, stat.ML
2,016
A Survey on Bayesian Deep Learning
Hao Wang, Dit-Yan Yeung
A comprehensive artificial intelligence system needs to not only perceive the environment with different `senses' (e.g., seeing and hearing) but also infer the world's conditional (or even causal) relations and corresponding uncertainty. The past decade has seen major advances in many perception tasks such as visual object recognition and speech recognition using deep learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. In recent years, Bayesian deep learning has emerged as a unified probabilistic framework to tightly integrate deep learning and Bayesian models. In this general framework, the perception of text or images using deep learning can boost the performance of higher-level inference and in turn, the feedback from the inference process is able to enhance the perception of text or images. This survey provides a comprehensive introduction to Bayesian deep learning and reviews its recent applications on recommender systems, topic models, control, etc. Besides, we also discuss the relationship and differences between Bayesian deep learning and other related topics such as Bayesian treatment of neural networks. For a constantly updating project page, please refer to https://github.com/js05212/BayesianDeepLearning-Survey.
http://arxiv.org/abs/1604.01662v4
"2016-04-06T15:35:08Z"
stat.ML, cs.AI, cs.CV, cs.LG, cs.NE
2,016
Topic Model Based Multi-Label Classification from the Crowd
Divya Padmanabhan, Satyanath Bhat, Shirish Shevade, Y. Narahari
Multi-label classification is a common supervised machine learning problem where each instance is associated with multiple classes. The key challenge in this problem is learning the correlations between the classes. An additional challenge arises when the labels of the training instances are provided by noisy, heterogeneous crowdworkers with unknown qualities. We first assume labels from a perfect source and propose a novel topic model where the present as well as the absent classes generate the latent topics and hence the words. We non-trivially extend our topic model to the scenario where the labels are provided by noisy crowdworkers. Extensive experimentation on real world datasets reveals the superior performance of the proposed model. The proposed model learns the qualities of the annotators as well, even with minimal training data.
http://arxiv.org/abs/1604.00783v1
"2016-04-04T09:24:12Z"
cs.LG
2,016
Nonparametric Spherical Topic Modeling with Word Embeddings
Kayhan Batmanghelich, Ardavan Saeedi, Karthik Narasimhan, Sam Gershman
Traditional topic models do not account for semantic regularities in language. Recent distributional representations of words exhibit semantic consistency over directional metrics such as cosine similarity. However, neither categorical nor Gaussian observational distributions used in existing topic models are appropriate to leverage such correlations. In this paper, we propose to use the von Mises-Fisher distribution to model the density of words over a unit sphere. Such a representation is well-suited for directional data. We use a Hierarchical Dirichlet Process for our base topic model and propose an efficient inference algorithm based on Stochastic Variational Inference. This model enables us to naturally exploit the semantic structures of word embeddings while flexibly discovering the number of topics. Experiments demonstrate that our method outperforms competitive approaches in terms of topic coherence on two different text corpora while offering efficient inference.
http://arxiv.org/abs/1604.00126v1
"2016-04-01T04:36:58Z"
cs.CL, cs.IR, cs.LG, stat.ML
2,016
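Aside: the von Mises-Fisher observation model above places word vectors on the unit sphere and clusters them by direction. Spherical k-means is its hard, fixed-K simplification (the paper infers K with an HDP); the sketch below uses random stand-in "embeddings".

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))               # stand-in word embeddings
X /= np.linalg.norm(X, axis=1, keepdims=True)    # project onto the unit sphere

K = 4
centers = X[rng.choice(len(X), K, replace=False)]
for _ in range(20):
    assign = (X @ centers.T).argmax(axis=1)      # assign by cosine similarity
    for k in range(K):
        m = X[assign == k].sum(axis=0)           # mean direction (vMF mode MLE)
        if np.linalg.norm(m) > 0:
            centers[k] = m / np.linalg.norm(m)
print(np.bincount(assign, minlength=K))          # cluster (topic) sizes
```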
Semantic Properties of Customer Sentiment in Tweets
Eun Hee Ko, Diego Klabjan
An increasing number of people are using online social networking services (SNSs), and a significant amount of information related to consumption experiences is shared in this new media form. Text mining is an emerging technique for mining useful information from the web. We aim at discovering semantic patterns in consumers' discussions on social media, in particular in tweets. Specifically, the purposes of this study are twofold: 1) finding the similarity and dissimilarity between two sets of textual documents that reflect consumers' sentiment polarities, i.e., positive vs. negative opinions, and 2) deriving actual content with a semantic trend from the textual data. The considered tweets include consumers' opinions on US retail companies (e.g., Amazon, Walmart). Cosine similarity and K-means clustering methods are used to achieve the former goal, and Latent Dirichlet Allocation (LDA), a popular topic modeling algorithm, is used for the latter purpose. This is the first study to discover semantic properties of textual data in a consumption context beyond sentiment analysis. In addition to the major findings, we apply LDA to the same data and derive latent topics that represent consumers' positive and negative opinions on social media.
http://arxiv.org/abs/1603.07624v1
"2016-03-24T15:22:52Z"
cs.CL, cs.IR, cs.SI, stat.ML
2,016
Mapping Out Narrative Structures and Dynamics Using Networks and Textual Information
Semi Min, Juyong Park
Human communication is often executed in the form of a narrative, an account of connected events composed of characters, actions, and settings. A coherent narrative structure is therefore a requisite for a well-formulated narrative -- be it fictional or nonfictional -- for informative and effective communication, opening up the possibility of a deeper understanding of a narrative by studying its structural properties. In this paper we present a network-based framework for modeling and analyzing the structure of a narrative, which is further expanded by incorporating methods from computational linguistics to utilize the narrative text. Modeling a narrative as a dynamically unfolding system, we characterize its progression via the growth patterns of the character network, and use sentiment analysis and topic modeling to represent the actual content of the narrative in the form of interaction maps between characters with associated sentiment values and keywords. This is a network framework advanced beyond the simple occurrence-based one most often used until now, allowing one to utilize the unique characteristics of a given narrative to a high degree. Given the ubiquity and importance of narratives, such advanced network-based representation and analysis framework may lead to a more systematic modeling and understanding of narratives for social interactions, expression of human sentiments, and communication.
http://arxiv.org/abs/1604.03029v1
"2016-03-24T10:59:28Z"
cs.CL, cs.SI, physics.soc-ph
2,016
Profiling vs. Time vs. Content: What does Matter for Top-k Publication Recommendation based on Twitter Profiles? - An Extended Technical Report
Chifumi Nishioka, Ansgar Scherp
So far it is unclear how different factors of a scientific publication recommender system based on users' tweets influence the recommendation performance. We examine three such factors, namely profiling method, temporal decay, and richness of content. Regarding profiling, we compare CF-IDF, which replaces the terms in TF-IDF with semantic concepts; HCF-IDF, a novel hierarchical variant of CF-IDF; and topic modeling. As temporal decay functions, we apply sliding window and exponential decay. In terms of the richness of content, we compare recommendations using both full-texts and titles of publications versus using only titles. Overall, the three factors yield twelve recommendation strategies. We conducted an online experiment with 123 participants and compared the strategies in a within-group design. The best recommendations are achieved by the strategy combining CF-IDF, a sliding window, and full-texts. However, strategies using the novel HCF-IDF profiling method achieve similar results using just the titles of the publications. Therefore, HCF-IDF can make recommendations when only short and sparse data is available.
http://arxiv.org/abs/1603.07016v3
"2016-03-22T22:43:50Z"
cs.DL, cs.IR
2,016
A System for Probabilistic Linking of Thesauri and Classification Systems
Lisa Posch, Philipp Schaer, Arnim Bleier, Markus Strohmaier
This paper presents a system which creates and visualizes probabilistic semantic links between concepts in a thesaurus and classes in a classification system. For creating the links, we build on the Polylingual Labeled Topic Model (PLL-TM). PLL-TM identifies probable thesaurus descriptors for each class in the classification system by using information from the natural language text of documents, their assigned thesaurus descriptors and their designated classes. The links are then presented to users of the system in an interactive visualization, providing them with an automatically generated overview of the relations between the thesaurus and the classification system.
http://arxiv.org/abs/1603.06485v1
"2016-03-21T16:34:13Z"
cs.AI, cs.CL, cs.DL
2,016
Document Neural Autoregressive Distribution Estimation
Stanislas Lauly, Yin Zheng, Alexandre Allauzen, Hugo Larochelle
We present an approach based on feed-forward neural networks for learning the distribution of textual documents. This approach is inspired by the Neural Autoregressive Distribution Estimator (NADE) model, which has been shown to be a good estimator of the distribution of discrete-valued high-dimensional vectors. In this paper, we show how NADE can successfully be adapted to the case of textual data, retaining from NADE the property that sampling or computing the probability of observations can be done exactly and efficiently. The approach can also be used to learn deep representations of documents that are competitive with those learned by alternative topic modeling approaches. Finally, we describe how the approach can be combined with a regular neural network N-gram model and substantially improve its performance, by making its learned representation sensitive to the larger, document-specific context.
http://arxiv.org/abs/1603.05962v1
"2016-03-18T19:24:44Z"
cs.LG, cs.CL
2,016
Predicting health inspection results from online restaurant reviews
Samantha Wong, Hamidreza Chinaei, Frank Rudzicz
Informatics around public health are increasingly shifting from the professional to the public spheres. In this work, we apply linguistic analytics to restaurant reviews, from Yelp, in order to automatically predict official health inspection reports. We consider two types of feature sets, i.e., keyword detection and topic model features, and use these in several classification methods. Our empirical analysis shows that these extracted features can predict public health inspection reports with over 90% accuracy using simple support vector machines.
http://arxiv.org/abs/1603.05673v1
"2016-03-17T20:20:32Z"
cs.CL, cs.LG
2,016
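Aside: a sketch of the prediction setup above, with LDA proportions standing in for the paper's keyword and topic feature sets and an SVM as the classifier. Reviews and labels are invented examples.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

reviews = ["found hair in my soup dirty tables",
           "amazing pasta clean and bright",
           "saw a mouse run across the floor",
           "fresh sushi spotless kitchen"]
passed = [0, 1, 0, 1]     # 1 = passed health inspection (toy labels)

# Word counts -> LDA topic proportions -> SVM classifier
model = make_pipeline(CountVectorizer(),
                      LatentDirichletAllocation(n_components=2, random_state=0),
                      SVC())
model.fit(reviews, passed)
print(model.predict(["sticky floors and a strange smell"]))
```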
Topic Modeling Using Distributed Word Embeddings
Ramandeep S Randhawa, Parag Jain, Gagan Madan
We propose a new algorithm for topic modeling, Vec2Topic, that identifies the main topics in a corpus using semantic information captured via high-dimensional distributed word embeddings. Our technique is unsupervised and generates a list of topics ranked with respect to importance. We find that it works better than existing topic modeling techniques such as Latent Dirichlet Allocation for identifying key topics in user-generated content, such as emails, chats, etc., where topics are diffused across the corpus. We also find that Vec2Topic works equally well for non-user-generated content, such as papers, reports, etc., and for small corpora such as a single document.
http://arxiv.org/abs/1603.04747v1
"2016-03-15T16:21:58Z"
cs.CL
2,016
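Aside: the core of the approach above is clustering the corpus vocabulary in a word-embedding space so that each cluster reads as a topic. The sketch below uses random stand-in vectors (a real run would use trained embeddings) and cluster size as a crude importance ranking; the paper's actual ranking is more involved.

```python
import numpy as np
from sklearn.cluster import KMeans

vocab = ["flight", "hotel", "booking", "invoice", "payment", "refund",
         "beach", "resort", "tax", "receipt"]
vectors = np.random.default_rng(0).standard_normal((len(vocab), 20))  # stand-ins

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)
for k in np.argsort(-np.bincount(km.labels_)):      # largest clusters first
    members = [w for w, l in zip(vocab, km.labels_) if l == k]
    print("topic", k, members)
```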
Who are Like-minded: Mining User Interest Similarity in Online Social Networks
Chunfeng Yang, Yipeng Zhou, Dah Ming Chiu
In this paper, we mine and learn to predict how similar a pair of users' interests towards videos are, based on demographic (age, gender and location) and social (friendship, interaction and group membership) information of these users. We use the video access patterns of active users as ground truth (a form of benchmark). We adopt tag-based user profiling to establish this ground truth, and justify why it is used instead of video-based methods, or many latent topic models such as LDA and Collaborative Filtering approaches. We then show the effectiveness of the different demographic and social features, and their combinations and derivatives, in predicting user interest similarity, based on different machine-learning methods for combining multiple features. We propose a hybrid tree-encoded linear model for combining the features, and show that it outperforms other linear and tree-based models. Our methods can be used to predict user interest similarity when the ground truth is not available, e.g. for new users, or inactive users whose interests may have changed from old access data, and are useful for video recommendation. Our study is based on a rich dataset from Tencent, a popular service provider of social networks, video services, and various other services in China.
http://arxiv.org/abs/1603.02175v1
"2016-03-07T17:51:42Z"
cs.SI
2,016
Recovering Structured Probability Matrices
Qingqing Huang, Sham M. Kakade, Weihao Kong, Gregory Valiant
We consider the problem of accurately recovering a matrix $B$ of size $M \times M$, which represents a probability distribution over $M^2$ outcomes, given access to an observed matrix of "counts" generated by taking independent samples from the distribution $B$. How can structural properties of the underlying matrix $B$ be leveraged to yield computationally efficient and information-theoretically optimal reconstruction algorithms? When can accurate reconstruction be accomplished in the sparse data regime? This basic problem lies at the core of a number of questions that are currently being considered by different communities, including building recommendation systems and collaborative filtering in the sparse data regime, community detection in sparse random graphs, learning structured models such as topic models or hidden Markov models, and the efforts from the natural language processing community to compute "word embeddings". Our results apply to the setting where $B$ has a low rank structure. For this setting, we propose an efficient algorithm that accurately recovers the underlying $M \times M$ matrix using $\Theta(M)$ samples. This result easily translates to $\Theta(M)$ sample algorithms for learning topic models and learning hidden Markov models. These linear sample complexities are optimal, up to constant factors, in an extremely strong sense: even testing basic properties of the underlying matrix (such as whether it has rank 1 or 2) requires $\Omega(M)$ samples. We provide an even stronger lower bound showing that distinguishing whether a sequence of observations were drawn from the uniform distribution over $M$ observations, versus being generated by an HMM with two hidden states, requires $\Omega(M)$ observations. This precludes sublinear-sample hypothesis tests for basic properties, such as identity or uniformity, as well as sublinear-sample estimators for quantities such as the entropy rate of HMMs.
http://arxiv.org/abs/1602.06586v6
"2016-02-21T21:47:45Z"
cs.LG
2,016
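The low-rank structure the paper exploits can be illustrated with the baseline estimator it improves on: form the empirical frequency matrix from the counts and project it onto its top-r singular subspace. A hedged numpy sketch with synthetic data; the paper's actual algorithm attains the $\Theta(M)$ rate with a more careful procedure.

import numpy as np

rng = np.random.default_rng(0)
M, r, n = 50, 2, 5000

# Ground-truth rank-2 probability matrix B over M x M outcomes.
U = rng.random((M, r))
B = U @ U.T
B /= B.sum()

# Counts from n independent draws of the joint outcome, then the naive estimate.
counts = rng.multinomial(n, B.ravel()).reshape(M, M)
P_hat = counts / n

# Project onto the top-r singular subspace to exploit the low-rank structure.
u, s, vt = np.linalg.svd(P_hat)
B_hat = (u[:, :r] * s[:r]) @ vt[:r, :]

print("naive  L1 error:", np.abs(P_hat - B).sum())
print("rank-r L1 error:", np.abs(B_hat - B).sum())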
Scaling up Dynamic Topic Models
Arnab Bhadury, Jianfei Chen, Jun Zhu, Shixia Liu
Dynamic topic models (DTMs) are very effective in discovering topics and capturing their evolution trends in time-series data. To perform posterior inference for DTMs, existing methods are all batch algorithms that scan the full dataset before each update of the model and make inexact variational approximations with mean-field assumptions. Due to the lack of a more scalable inference algorithm, DTMs, despite their usefulness, have not been used to capture large-scale topic dynamics. This paper fills this research void and presents a fast and parallelizable inference algorithm using Gibbs sampling with stochastic gradient Langevin dynamics that does not make any unwarranted assumptions. We also present a Metropolis-Hastings based $O(1)$ sampler for the topic assignment of each word token. In a distributed environment, our algorithm requires very little communication between workers during sampling (it is almost embarrassingly parallel) and scales up to large-scale applications. We are able to learn the largest Dynamic Topic Model to our knowledge, capturing the dynamics of 1,000 topics from 2.6 million documents in less than half an hour, and our empirical results show that our algorithm is not only orders of magnitude faster than the baselines but also achieves lower perplexity.
http://arxiv.org/abs/1602.06049v1
"2016-02-19T05:55:08Z"
stat.ML, H.4; G.3
2,016
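The stochastic gradient Langevin dynamics (SGLD) update at the core of such samplers adds properly scaled Gaussian noise to a minibatch gradient step, so the iterates approximately sample the posterior rather than merely optimize it. A self-contained sketch on a toy Gaussian posterior, assuming the standard SGLD form; the paper applies this kind of update to the DTM's time-evolving topic parameters.

import numpy as np

rng = np.random.default_rng(1)

# Toy target: posterior over the mean `theta` of N(theta, 1) data under a
# N(0, 1) prior, approached with minibatch gradients plus injected noise.
N, batch = 10_000, 100
data = rng.normal(2.0, 1.0, size=N)

theta, eps = 0.0, 1e-4
for _ in range(2_000):
    mb = rng.choice(data, size=batch, replace=False)
    grad_prior = -theta                           # d/dtheta log N(theta; 0, 1)
    grad_lik = (N / batch) * np.sum(mb - theta)   # rescaled minibatch gradient
    theta += 0.5 * eps * (grad_prior + grad_lik) + rng.normal(0.0, np.sqrt(eps))

print("SGLD estimate of the posterior mean:", round(float(theta), 2))  # approx. 2.0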
Spectral Learning for Supervised Topic Models
Yong Ren, Yining Wang, Jun Zhu
Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document. Existing inference methods are based on variational approximation or Monte Carlo sampling, which often suffer from poor local optima. Spectral methods have been applied to learn unsupervised topic models, such as latent Dirichlet allocation (LDA), with provable guarantees. This paper investigates the possibility of applying spectral methods to recover the parameters of supervised LDA (sLDA). We first present a two-stage spectral method, which recovers the parameters of LDA and then applies a power update method to recover the regression model parameters. We then present a single-phase spectral algorithm that jointly recovers the topic distribution matrix as well as the regression weights. Our spectral algorithms are provably correct and computationally efficient. We prove a sample complexity bound for each algorithm and subsequently derive a sufficient condition for the identifiability of sLDA. Thorough experiments on synthetic and real-world datasets verify the theory and demonstrate the practical effectiveness of the spectral algorithms. In fact, our results on a large-scale review rating dataset demonstrate that our single-phase spectral algorithm alone achieves comparable or even better performance than state-of-the-art methods, whereas previous work on spectral methods has rarely reported such promising performance.
http://arxiv.org/abs/1602.06025v1
"2016-02-19T02:07:20Z"
cs.LG, cs.CL, cs.IR, stat.ML
2,016
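A step shared by spectral estimators for topic models is forming unbiased low-order moments of word co-occurrence from documents, which the recovery stage then decomposes. A hedged sketch of that moment-estimation step only; the paper's two-stage and single-phase recovery procedures are not reproduced here.

import numpy as np

def cooccurrence_moment(doc_term, vocab_size):
    """Average over documents of the distribution of distinct word-position
    pairs; the diagonal correction removes self-pairs of the same token."""
    M2 = np.zeros((vocab_size, vocab_size))
    for counts in doc_term:                 # counts: length-V word-count vector
        n = counts.sum()
        if n < 2:
            continue
        M2 += (np.outer(counts, counts) - np.diag(counts)) / (n * (n - 1))
    return M2 / len(doc_term)

docs = [np.array([2, 1, 0]), np.array([0, 3, 1])]
print(cooccurrence_moment(docs, 3))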
Semantic Scan: Detecting Subtle, Spatially Localized Events in Text Streams
Abhinav Maurya, Kenton Murray, Yandong Liu, Chris Dyer, William W. Cohen, Daniel B. Neill
Early detection and precise characterization of emerging topics in text streams can be highly useful in applications such as timely and targeted public health interventions and discovering evolving regional business trends. Many methods have been proposed for detecting emerging events in text streams using topic modeling. However, these methods have numerous shortcomings that make them unsuitable for rapid detection of locally emerging events on massive text streams. In this paper, we describe Semantic Scan (SS), which has been developed specifically to overcome these shortcomings in detecting new spatially compact events in text streams. Semantic Scan integrates novel contrastive topic modeling with online document assignment and principled likelihood ratio-based spatial scanning to identify emerging events with unexpected patterns of keywords hidden in text streams. This enables more timely and accurate detection and characterization of anomalous, spatially localized emerging events. Semantic Scan does not require manual intervention or labeled training data, and is robust to noise in real-world text data since it identifies anomalous text patterns that occur in a cluster of new documents rather than an anomaly in a single new document. We compare Semantic Scan to alternative state-of-the-art methods such as Topics over Time, Online LDA, and Labeled LDA on two real-world tasks: (i) a disease surveillance task monitoring free-text Emergency Department chief complaints in Allegheny County, and (ii) an emerging business trend detection task based on Yelp reviews. On both tasks, we find that Semantic Scan provides significantly better event detection and characterization accuracy than competing approaches, while running up to an order of magnitude faster.
http://arxiv.org/abs/1602.04393v1
"2016-02-13T22:33:56Z"
cs.IR, stat.ML
2,016
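The spatial scanning component can be illustrated with the classic expectation-based Poisson scan statistic: each candidate region is scored by the log-likelihood ratio of its observed count of emerging-topic documents against its baseline expectation. A minimal sketch with made-up counts; region enumeration and the contrastive topic model are omitted.

import numpy as np

def poisson_scan_score(c, b, C, B):
    """Kulldorff-style log likelihood ratio for a region with observed count c
    and baseline b, given totals C and B (higher = more anomalous)."""
    if c == 0 or c / b <= (C - c) / (B - b):
        return 0.0
    return c * np.log(c / b) + (C - c) * np.log((C - c) / (B - b)) - C * np.log(C / B)

# Counts of emerging-topic documents per location vs. baseline volumes (toy data).
counts = np.array([3.0, 4.0, 20.0, 5.0, 2.0])
base = np.array([5.0, 5.0, 5.0, 5.0, 5.0])
C, B = counts.sum(), base.sum()
scores = [poisson_scan_score(c, b, C, B) for c, b in zip(counts, base)]
print("most anomalous location:", int(np.argmax(scores)))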
Improved Query Topic Models via Pseudo-Relevant Pólya Document Models
Ronan Cummins
Query expansion via pseudo-relevance feedback is a popular method of overcoming the problem of vocabulary mismatch and of increasing average retrieval effectiveness. In this paper, we develop a new method that estimates a query topic model from a set of pseudo-relevant documents using a new language modelling framework. We assume that documents are generated via a mixture of multivariate Pólya distributions, and we show that by identifying the topical terms in each document, we can appropriately select terms that are likely to belong to the query topic model. The results of experiments on several TREC collections show that the new approach compares favourably to current state-of-the-art expansion methods.
http://arxiv.org/abs/1602.01665v1
"2016-02-04T13:19:31Z"
cs.IR
2,016
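The surrounding pseudo-relevance feedback pipeline looks roughly as follows: pool the top-ranked documents, score candidate terms by how much more prominent they are in that pool than in the collection, and add the best terms to the query. The scorer below is a simple likelihood-ratio stand-in, not the multivariate Pólya estimator the paper derives; the names and the smoothing constant mu are illustrative.

from collections import Counter

def expansion_terms(pseudo_rel_docs, collection_docs, k=5, mu=0.5):
    """Rank terms by their relative prominence in the pseudo-relevant set
    versus the whole collection (illustrative, not the Polya model)."""
    rel = Counter(w for d in pseudo_rel_docs for w in d)
    coll = Counter(w for d in collection_docs for w in d)
    R, N = sum(rel.values()), sum(coll.values())
    scores = {w: (rel[w] / R) / ((coll[w] + mu) / (N + mu * len(coll)))
              for w in rel}
    return sorted(scores, key=scores.get, reverse=True)[:k]

docs = [["topic", "model", "query"], ["query", "expansion", "model"]]
coll = docs + [["sports", "news"], ["weather", "news"]]
print(expansion_terms(docs, coll))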
"Draw My Topics": Find Desired Topics fast from large scale of Corpus
Jason Dou, Ni Sun, Xiaojun Zou
We develop the "Draw My Topics" toolkit, which provides a fast way to incorporate social scientists' interests into standard topic modelling. Instead of using a raw corpus with only primitive processing as input, an algorithm based on the Vector Space Model and conditional entropy is used to connect social scientists' intentions with the output of unsupervised topic models. Room is also left for users to make adjustments for the specific corpora of interest to them. We demonstrate the toolkit's use on the Diachronic People's Daily Corpus in Chinese.
http://arxiv.org/abs/1602.01428v1
"2016-02-03T19:44:37Z"
cs.CL, cs.IR
2,016
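A hedged sketch of the matching step such a toolkit needs: embed the user's seed keywords and each topic's word distribution in a shared vector space and rank topics by cosine similarity. The conditional-entropy component is omitted here, and the vocabulary and topic values are illustrative.

import numpy as np

vocab = {"economy": 0, "trade": 1, "sports": 2, "film": 3}
# Topic-word distributions from any standard topic model (stand-in values).
topics = np.array([[0.60, 0.30, 0.05, 0.05],   # topic 0: economy/trade
                   [0.05, 0.05, 0.50, 0.40]])  # topic 1: sports/film

def rank_topics(seed_words):
    """Rank topics by cosine similarity to a bag of user-supplied keywords."""
    q = np.zeros(len(vocab))
    for w in seed_words:
        q[vocab[w]] += 1.0
    sims = topics @ q / (np.linalg.norm(topics, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)

print(rank_topics(["economy", "trade"]))  # topic 0 ranked first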
DOLDA - a regularized supervised topic model for high-dimensional multi-class regression
Måns Magnusson, Leif Jonsson, Mattias Villani
Generating user-interpretable multi-class predictions in data-rich environments with many classes and explanatory covariates is a daunting task. We introduce Diagonal Orthant Latent Dirichlet Allocation (DOLDA), a supervised topic model for multi-class classification that can handle both many classes and many covariates. To handle many classes we use the recently proposed Diagonal Orthant (DO) probit model (Johndrow et al., 2013) together with an efficient horseshoe prior for variable selection/shrinkage (Carvalho et al., 2010). We propose a computationally efficient parallel Gibbs sampler for the new model. An important advantage of DOLDA is that learned topics are directly connected to individual classes without the need for a reference class. We evaluate the model's predictive accuracy on two datasets and demonstrate DOLDA's advantage in interpreting the generated predictions.
http://arxiv.org/abs/1602.00260v2
"2016-01-31T15:33:10Z"
stat.ML
2,016
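Once the Gibbs sampler has produced regression weights and a document's topic proportions, diagonal orthant probit prediction reduces to comparing one latent Gaussian utility per class. A minimal sketch of that prediction step under stand-in parameters; training, the horseshoe prior, and the sign constraints of the DO construction are all omitted.

import numpy as np

rng = np.random.default_rng(2)
K, n_classes = 10, 3                        # number of topics and classes
eta = rng.normal(size=(n_classes, K))       # stand-in learned regression weights
z_bar = rng.dirichlet(np.ones(K))           # a document's mean topic proportions

# DO probit prediction: one latent utility per class, argmax wins.
utilities = eta @ z_bar
print("predicted class:", int(np.argmax(utilities)))
# Sampling Gaussian noise around the utilities instead of taking the argmax
# directly yields predictive class probabilities rather than a hard label.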
Parametric Bayesian Rejuvenation in Ambient Assisted Living through Software-based Thematic 5G Management
Rossi Kamal, Choong Seon Hong
Improving elderly engagement is vital to sustaining independent living. However, recommended practices fail to account for personal traits despite their socio-economic promise. The recent proliferation of IoT, with the advent of smart objects/things and personalized services, paves the way for context-aware service management. Accordingly, the major goal of this paper is to develop a context-aware model for predicting engagement in elderly care. Two key requirements are identified for elderly engagement: (a) discovery of relevant contexts and (b) scaling up of engagement over time. This stipulation poses significant challenges, however, such as the unobservability, independence, and composite relationships of contexts. We therefore propose a topic-model-based approach that addresses the scalability of contexts and their relationship with engagement. Finally, a systematic framework is demonstrated that pinpoints the key goals of context-aware services through participants' opinions, usage, and feedback.
http://arxiv.org/abs/1601.06258v1
"2016-01-23T09:45:04Z"
cs.CY
2,016
Political Speech Generation
Valentin Kassarnig
In this report we present a system that can generate political speeches for a desired political party. Furthermore, the system allows the user to specify whether a speech should express a supportive or opposing opinion. The system relies on a combination of several state-of-the-art NLP methods, which are discussed in this report. These include n-grams, the Justeson & Katz POS tag filter, recurrent neural networks, and latent Dirichlet allocation. Sequences of words are generated based on probabilities obtained from two underlying models: a language model takes care of grammatical correctness while a topic model aims for textual consistency. Both models were trained on the Convote dataset, which contains transcripts from US congressional floor debates. Furthermore, we present a manual and an automated approach to evaluating the quality of generated speeches. In an experimental evaluation, the generated speeches showed very high quality in terms of grammatical correctness and sentence transitions.
http://arxiv.org/abs/1601.03313v2
"2016-01-13T16:58:05Z"
cs.CL
2,016
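The combination of the two models can be sketched as sampling each next word from a mixture of a bigram language-model probability (grammar) and a topic-conditional word probability (consistency). Everything below is a toy stand-in: a tiny corpus, hand-set topic weights, and a mixing coefficient alpha that the report does not specify.

import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(3)
corpus = "we support this bill we oppose this amendment we support reform".split()

# Bigram counts stand in for the language model (grammatical continuity).
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

# Hand-set topic-conditional word weights stand in for the topic model.
topic_p = {"support": 0.4, "bill": 0.3, "reform": 0.3}

def next_word(prev, alpha=0.7):
    """Mix the bigram LM with topic weights; alpha trades grammar vs. topic."""
    cands = bigrams[prev]
    if not cands:                              # no continuation: unigram backoff
        return str(rng.choice(corpus))
    total = sum(cands.values())
    words = list(cands)
    probs = np.array([alpha * cands[w] / total + (1 - alpha) * topic_p.get(w, 0.0)
                      for w in words])
    probs /= probs.sum()
    return str(rng.choice(words, p=probs))

out = ["we"]
for _ in range(5):
    out.append(next_word(out[-1]))
print(" ".join(out))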
Scalable Models for Computing Hierarchies in Information Networks
Baoxu Shi, Tim Weninger
Information hierarchies are organizational structures that are often used to organize and present large and complex bodies of information, as well as to provide a mechanism for effective human navigation. Fortunately, many statistical and computational models exist that automatically generate hierarchies; however, existing approaches do not consider the linkages in information networks that are increasingly common in real-world scenarios. Current approaches also tend to present topics as abstract probability distributions over words rather than as tangible nodes from the original network. Furthermore, the statistical techniques present in many previous works are not yet capable of processing data at Web scale. In this paper we present the Hierarchical Document Topic Model (HDTM), which uses a distributed vertex-programming process to calculate a nonparametric Bayesian generative model. Experiments on three medium-sized datasets and the entire Wikipedia dataset show that HDTM can infer accurate hierarchies even over large information networks.
http://arxiv.org/abs/1601.00626v1
"2016-01-04T20:05:19Z"
cs.AI, cs.DL, cs.LG
2,016
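The network-grounded flavor of the approach can be sketched simply: fix a root document, place the remaining documents at levels given by their link distance from the root, and describe each level by its word counts, so every topic stays attached to actual nodes. The real model infers the hierarchy with random walks and nonparametric Bayesian machinery, which this toy omits; the graph and word lists are illustrative.

from collections import Counter, deque

# Toy document network: node -> linked nodes, plus each node's words.
links = {"root": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
words = {"root": ["science"], "a": ["physics", "quantum"],
         "b": ["biology", "cells"], "c": ["quantum", "cells"]}

# BFS from the root assigns each document a level in the hierarchy.
level = {"root": 0}
q = deque(["root"])
while q:
    u = q.popleft()
    for v in links[u]:
        if v not in level:
            level[v] = level[u] + 1
            q.append(v)

# Describe each level by its aggregated word counts; nodes remain the labels.
per_level = {}
for node, lv in level.items():
    per_level.setdefault(lv, Counter()).update(words[node])
for lv in sorted(per_level):
    print(lv, per_level[lv].most_common(2))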