2014
A Generative Model for User Simulation in a Spatial Navigation Domain
Aciel Eshky | Ben Allison | Subramanian Ramamoorthy | Mark Steedman
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics
2012
Generative Goal-Driven User Simulation for Dialog Management
Aciel Eshky | Ben Allison | Mark Steedman
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
2008
Authorship Attribution of E-Mail: Comparing Classifiers over a New Corpus for Evaluation
Ben Allison | Louise Guthrie
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
The release of the Enron corpus provided a unique resource for studying aspects of email use: it is largely unfiltered, and therefore presents a relatively complete collection of emails for a reasonably large number of correspondents. This paper describes a newly created subcorpus of the Enron emails which we suggest can be used to test techniques for authorship attribution, and further applies three different classification methods to this task to establish baseline results. Two of the classifiers are standard and have been shown to perform well in the literature; the third is novel and based on concurrent work that proposes a Bayesian hierarchical distribution for word counts in documents. For each classifier, we present results using six text representations, including linguistic structures derived from a parser as well as lexical information.
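The abstract does not spell out the "Bayesian hierarchical distribution for word counts"; a minimal sketch, assuming a Dirichlet-multinomial (Pólya) formulation, might score a document against each candidate author's posterior predictive distribution like this (the toy corpus, author names, and prior alpha are all illustrative, not the paper's actual model):

```python
import math
from collections import Counter

def log_posterior_predictive(doc_counts, class_counts, V, alpha=0.5):
    """Log P(document | class) under a Dirichlet-multinomial (Polya) model:
    the class's training counts define a posterior Dirichlet, and the test
    document is scored by that posterior's predictive distribution.
    The multinomial coefficient is omitted (constant across classes)."""
    N = sum(class_counts.values())   # training tokens for this class
    n = sum(doc_counts.values())     # tokens in the test document
    ll = math.lgamma(N + V * alpha) - math.lgamma(N + n + V * alpha)
    for w, n_w in doc_counts.items():
        c_w = class_counts.get(w, 0)
        ll += math.lgamma(c_w + n_w + alpha) - math.lgamma(c_w + alpha)
    return ll

def attribute(doc, training):
    """Return the candidate author whose model best predicts the document."""
    doc_counts = Counter(doc.split())
    vocab = {w for c in training.values() for w in c} | set(doc_counts)
    return max(training, key=lambda author: log_posterior_predictive(
        doc_counts, training[author], len(vocab)))

# Toy usage with two hypothetical correspondents:
training = {
    "alice": Counter("meeting budget report deadline budget".split()),
    "bob": Counter("lunch weekend game movie lunch".split()),
}
print(attribute("budget report deadline", training))  # -> alice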
Unsupervised Learning-based Anomalous Arabic Text Detection
Nasser Abouzakhar | Ben Allison | Louise Guthrie
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
Modern society has become inevitably dependent on the Web as a vital source of information and communication. However, the Web has also become an ideal channel for various terrorist organisations to publish misleading information and to send unintelligible messages to communicate with their clients. The increase in anomalous, misleading information published on the Web has led to an increase in security threats, and existing Web security mechanisms and protocols are not designed to deal with such recently developed problems. Developing technology to detect anomalous textual information has become one of the major challenges within the NLP community. This paper introduces the problem of anomalous text detection by automatically extracting linguistic features from documents and evaluating those features for patterns of suspicious and/or inconsistent information in Arabic documents. To achieve this, we define specific linguistic features that characterise various Arabic writing styles. The paper also introduces the main challenges in Arabic processing and describes the proposed unsupervised learning model for detecting anomalous Arabic textual information.
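The abstract leaves the model unspecified; purely as a hedged illustration of unsupervised anomaly scoring over stylistic features (every feature below is our assumption, not the paper's design), one could flag documents whose stylistic profile deviates from the rest of the collection:

```python
import statistics

def style_features(text):
    """Toy surface features; the Arabic-specific features the paper defines
    (e.g. morphology or function-word usage) would replace these."""
    words = text.split()
    n = max(len(words), 1)
    return [
        sum(len(w) for w in words) / n,                          # mean word length
        len(set(words)) / n,                                     # type-token ratio
        sum(text.count(p) for p in "!?.,") / max(len(text), 1),  # punctuation rate
    ]

def anomaly_scores(documents):
    """Score each document by its summed per-feature z-score distance
    from the collection's mean stylistic profile; high scores mark
    candidates for inconsistent or suspicious text."""
    feats = [style_features(d) for d in documents]
    means = [statistics.mean(col) for col in zip(*feats)]
    stdevs = [statistics.pstdev(col) or 1.0 for col in zip(*feats)]
    return [sum(abs((v - m) / s) for v, m, s in zip(f, means, stdevs))
            for f in feats]
```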
Professor or Screaming Beast? Detecting Anomalous Words in Chinese
Wei Liu | Ben Allison | Louise Guthrie
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
The Internet has become the most popular platform for communication. However, because most modern computer keyboards are Latin-based, the characters (Hanzi) of Asian languages such as Chinese cannot be input directly with these keyboards. As a result, methods for representing Chinese characters using the Latin alphabet were introduced, the most popular being the Pinyin input system. Pinyin is also called Romanised Chinese, in that it phonetically resembles the Chinese character it represents. Due to the highly ambiguous mapping from Pinyin to Chinese characters, word misuses can occur when typing on a standard computer keyboard, and more commonly so in Internet chat-rooms or instant messengers, where the language used is less formal. In this paper we aim to develop a system that can automatically identify such anomalies, whether they are simple typos or intentional, and then suggest the correct word to be used.
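The ambiguity the abstract describes, and one simple context-based correction strategy (our assumption, not necessarily the paper's method), can be sketched with a hypothetical toy candidate table:

```python
# Toy Pinyin -> Hanzi candidate table (illustrative, not a real
# input-method dictionary); each syllable maps to several characters.
PINYIN_CANDIDATES = {
    "jiao": ["教", "叫", "脚"],
    "shou": ["授", "受", "兽"],
}

# Hypothetical co-occurrence counts, as might be drawn from a corpus:
BIGRAM_COUNTS = {("教", "授"): 950, ("叫", "授"): 2, ("脚", "授"): 1}

def best_candidate(pinyin, next_char):
    """Rank a syllable's candidate characters by how often each
    co-occurs with the following character in the corpus."""
    candidates = PINYIN_CANDIDATES.get(pinyin, [])
    return max(candidates, key=lambda c: BIGRAM_COUNTS.get((c, next_char), 0))

# 教授 'professor' vs 叫兽 'screaming beast' (both rendered jiao-shou in
# Pinyin): given the following character 授, corpus evidence selects 教.
print(best_candidate("jiao", "授"))  # -> 教
```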
Using a Probabilistic Model of Context to Detect Word Obfuscation
Sanaz Jabbari | Ben Allison | Louise Guthrie
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
This paper proposes a distributional model of word use and word meaning which is derived purely from a body of text, and then applies this model to determine whether certain words are used in or out of context. We suggest that the contexts of words can be viewed as multinomially distributed random variables. Using this basic idea, we formulate the problem of detecting whether or not a word is used in context as a likelihood ratio test. We also define a measure of semantic relatedness between a word and its context using the same model. We assume that words that typically appear together are related, and thus have similar probability distributions, and that words used in an unusual way will have probability distributions dissimilar from those of their surrounding context. The relatedness of a word to its context is based on the Kullback-Leibler divergence between the probability distributions assigned to the constituent words in the given sentence. We apply our methods to a defense-oriented application in which certain words in an intercepted communication are substituted with other words.
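A minimal sketch of the divergence-based relatedness measure the abstract describes, assuming context distributions are estimated from simple windowed co-occurrence counts with add-one smoothing (the window size and estimator are our assumptions):

```python
import math
from collections import Counter, defaultdict

def context_distributions(corpus, window=2):
    """Estimate a smoothed multinomial over context words for each
    target word from windowed co-occurrence counts."""
    counts = defaultdict(Counter)
    vocab = set()
    for sent in corpus:
        toks = sent.split()
        vocab.update(toks)
        for i, w in enumerate(toks):
            lo, hi = max(0, i - window), min(len(toks), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[w][toks[j]] += 1
    dists = {}
    for w, c in counts.items():
        total = sum(c.values()) + len(vocab)  # add-one smoothing
        dists[w] = {v: (c[v] + 1) / total for v in vocab}
    return dists

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) over a shared vocabulary."""
    return sum(p[v] * math.log(p[v] / q[v]) for v in p)

def fit_to_context(word, sentence, dists):
    """Mean KL divergence from the word's distribution to those of its
    sentence neighbours; a high value suggests a substituted word."""
    others = [t for t in sentence.split() if t != word and t in dists]
    return (sum(kl_divergence(dists[word], dists[t]) for t in others)
            / max(len(others), 1))
```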
An Improved Hierarchical Bayesian Model of Language for Document Classification
Ben Allison
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)
2006
A Closer Look at Skip-gram Modelling
David Guthrie | Ben Allison | Wei Liu | Louise Guthrie | Yorick Wilks
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)
Data sparsity is a large problem in natural language processing: language is a system of rare events, so varied and complex that, even using an extremely large corpus, we can never accurately model all possible strings of words. This paper examines the use of skip-grams (a technique whereby n-grams are still stored to model language, but tokens are allowed to be skipped) to overcome the data sparsity problem. We analyze this by computing all possible skip-grams in a training corpus and measuring how many adjacent (standard) n-grams they cover in test documents. We examine skip-gram modelling using one to four skips with various amounts of training data, and test against similar documents as well as documents generated by a machine translation system. In this paper we also determine the amount of extra training data required to achieve skip-gram coverage using standard adjacent tri-grams.
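A minimal sketch of k-skip-n-gram extraction as the abstract defines it (n tokens kept in their original order, with at most k tokens skipped in total); coverage can then be measured by checking a test document's adjacent n-grams against this set. The function name and example sentence are illustrative:

```python
from itertools import combinations

def skipgrams(tokens, n=3, k=2):
    """All k-skip-n-grams: n tokens in original order, skipping at most
    k tokens in total. Choosing n-1 follow-on indices from the next
    n-1+k positions enforces the skip budget."""
    grams = set()
    for i in range(len(tokens)):
        following = range(i + 1, min(i + n + k, len(tokens)))
        for combo in combinations(following, n - 1):
            grams.add((tokens[i],) + tuple(tokens[j] for j in combo))
    return grams

sent = "insurgents killed in ongoing fighting".split()
# 2-skip-tri-grams include the adjacent tri-grams plus gapped ones such as
# ('insurgents', 'killed', 'ongoing') and ('insurgents', 'in', 'fighting').
for gram in sorted(skipgrams(sent, n=3, k=2)):
    print(gram)
```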
Towards the Orwellian Nightmare: Separation of Business and Personal Emails
Sanaz Jabbari | Ben Allison | David Guthrie | Louise Guthrie
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions