Jennifer Williams


2018

Recognizing Emotions in Video Using Multimodal DNN Feature Fusion
Jennifer Williams | Steven Kleinegesse | Ramona Comanescu | Oana Radu
Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)

We present our system description of input-level multimodal fusion of audio, video, and text for recognition of emotions and their intensities for the 2018 First Grand Challenge on Computational Modeling of Human Multimodal Language. Our proposed approach is based on input-level feature fusion with sequence learning from bidirectional Long Short-Term Memory (BLSTM) deep neural networks (DNNs). We show that our fusion approach outperforms unimodal predictors. Our system performs 6-way simultaneous classification and regression, allowing for overlapping emotion labels in a video segment. This leads to an overall binary accuracy of 90%, an overall 4-class accuracy of 89.2%, and an overall mean absolute error (MAE) of 0.12. Our work shows that an early fusion technique can effectively predict the presence of multi-label emotions as well as their coarse-grained intensities. The presented multimodal approach creates a simple and robust baseline on this new Grand Challenge dataset. Furthermore, we provide a detailed analysis of emotion intensity distributions as output from our DNN, as well as a related discussion concerning the inherent difficulty of this task.
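
The input-level (early) fusion described in this abstract can be illustrated with a minimal sketch: per-timestep audio, video, and text features are concatenated before a bidirectional LSTM, whose summary state predicts one intensity per emotion. This is an illustrative reconstruction, not the authors' code; the feature dimensions, layer sizes, and use of PyTorch are assumptions.

```python
# Illustrative sketch (not the paper's code): input-level fusion of audio,
# video, and text features followed by a bidirectional LSTM that predicts
# an intensity score for each of six emotions. Feature sizes are invented.
import torch
import torch.nn as nn

class EarlyFusionBLSTM(nn.Module):
    def __init__(self, audio_dim=74, video_dim=35, text_dim=300,
                 hidden_dim=128, num_emotions=6):
        super().__init__()
        fused_dim = audio_dim + video_dim + text_dim  # concatenate per time step
        self.blstm = nn.LSTM(fused_dim, hidden_dim, batch_first=True,
                             bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_emotions)

    def forward(self, audio, video, text):
        # All inputs: (batch, time, feature_dim), aligned on the time axis.
        fused = torch.cat([audio, video, text], dim=-1)
        _, (h, _) = self.blstm(fused)
        # Concatenate the final forward and backward hidden states.
        summary = torch.cat([h[-2], h[-1]], dim=-1)
        return self.head(summary)  # one intensity score per emotion

# Example forward pass with random features for a batch of 4 video segments.
model = EarlyFusionBLSTM()
scores = model(torch.randn(4, 20, 74), torch.randn(4, 20, 35),
               torch.randn(4, 20, 300))
print(scores.shape)  # torch.Size([4, 6])
```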

DNN Multimodal Fusion Techniques for Predicting Video Sentiment
Jennifer Williams | Ramona Comanescu | Oana Radu | Leimin Tian
Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)

We present our work on sentiment prediction using the benchmark MOSI dataset from the CMU-MultimodalDataSDK. Previous work on multimodal sentiment analysis has focused on input-level feature fusion or decision-level fusion. Here, we propose an intermediate-level feature fusion, which merges weights from each modality (audio, video, and text) during training with subsequent additional training. Moreover, we tested principal component analysis (PCA) for feature selection. We found that applying PCA increases unimodal performance, and multimodal fusion outperforms unimodal models. Our experiments show that our proposed intermediate-level feature fusion outperforms other fusion techniques, achieving the best performance with an overall binary accuracy of 74.0% on video+text modalities. Our work also improves feature selection for unimodal sentiment analysis, while proposing a novel and effective multimodal fusion architecture for this task.
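
A rough sketch of the intermediate-level fusion idea, assuming per-modality encoders whose hidden representations are concatenated and trained further with a joint head; the layer sizes, the video+text pairing, and the use of PyTorch are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of intermediate-level fusion (not the paper's code):
# each modality has its own small encoder, their hidden representations are
# concatenated, and a joint head is trained on top in a further training stage.
import torch
import torch.nn as nn

def encoder(in_dim, hidden_dim=64):
    return nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())

class IntermediateFusion(nn.Module):
    def __init__(self, video_dim=35, text_dim=300, hidden_dim=64):
        super().__init__()
        self.video_enc = encoder(video_dim, hidden_dim)
        self.text_enc = encoder(text_dim, hidden_dim)
        # Joint layers trained after (or alongside) the unimodal encoders.
        self.head = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim),
                                  nn.ReLU(), nn.Linear(hidden_dim, 1))

    def forward(self, video, text):
        fused = torch.cat([self.video_enc(video), self.text_enc(text)], dim=-1)
        return self.head(fused)  # sentiment score for the segment

# Example forward pass on random video+text features for 8 segments.
model = IntermediateFusion()
out = model(torch.randn(8, 35), torch.randn(8, 300))
print(out.shape)  # torch.Size([8, 1])
```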

2017

Twitter Language Identification Of Similar Languages And Dialects Without Ground Truth
Jennifer Williams | Charlie Dagli
Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)

We present a new method to bootstrap-filter Twitter language ID labels in our dataset for automatic language identification (LID). Our method combines geo-location, original Twitter LID labels, and Amazon Mechanical Turk to resolve missing and unreliable labels. We are the first to compare LID classification performance using the MIRA algorithm and langid.py. We show classifier performance on different versions of our dataset with high accuracy using only Twitter data, without ground truth, and very few training examples. We also show how Platt scaling can be used to calibrate MIRA classifier output values into a probability distribution over candidate classes, making the output more intuitive. Our method allows for fine-grained distinctions between similar languages and dialects and allows us to rediscover the language composition of our Twitter dataset.
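
Platt scaling, as mentioned above, amounts to fitting a logistic regression on held-out classifier scores so that raw margins map to calibrated probabilities. The sketch below uses scikit-learn and synthetic scores purely for illustration; it is not the authors' MIRA output pipeline.

```python
# Illustrative sketch of Platt scaling (not the paper's code): fit a logistic
# regression on held-out decision scores so that raw classifier margins are
# mapped to calibrated class probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic held-out data: raw margin scores and their true binary labels.
scores = np.concatenate([rng.normal(-1.0, 1.0, 500), rng.normal(1.5, 1.0, 500)])
labels = np.concatenate([np.zeros(500), np.ones(500)])

# Platt scaling: a one-feature logistic regression over the raw scores.
platt = LogisticRegression()
platt.fit(scores.reshape(-1, 1), labels)

# Calibrated probability that a new example with margin 0.7 is the positive class.
print(platt.predict_proba(np.array([[0.7]]))[0, 1])
```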

2014

Finding Good Enough: A Task-Based Evaluation of Query Biased Summarization for Cross-Language Information Retrieval
Jennifer Williams | Sharon Tam | Wade Shen
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

A Language-Independent Approach to Automatic Text Difficulty Assessment for Second-Language Learners
Wade Shen | Jennifer Williams | Tamas Marius | Elizabeth Salesky
Proceedings of the Second Workshop on Predicting and Improving Text Readability for Target Reader Populations

Meaning Unit Segmentation in English and Chinese: a New Approach to Discourse Phenomena
Jennifer Williams | Rafael Banchs | Haizhou Li
Proceedings of the Workshop on Discourse in Machine Translation

2012

Extracting and modeling durations for habits and events from Twitter
Jennifer Williams | Graham Katz
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

A New Twitter Verb Lexicon for Natural Language Processing
Jennifer Williams | Graham Katz
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

We describe in-progress work on the creation of a new lexical resource that contains a list of 486 verbs annotated with quantified temporal durations for the events that they describe. This resource is being compiled from more than 14 million tweets from the Twitter microblogging site. We are creating this lexicon of verbs and typical durations to address a gap in the information represented in existing research. The data contained in this lexical resource is unlike that of existing resources, which have traditionally been compiled from literature excerpts, news stories, and full-length weblogs. Knowledge about how long an event typically lasts is crucial for natural language processing and is especially useful when the temporal duration of an event is only implied. We use data from Twitter because it is a rich resource: people publicly post about real events, and the real durations of those events, throughout the day.

Extracting fine-grained durations for verbs from Twitter
Jennifer Williams
Proceedings of ACL 2012 Student Research Workshop

2002

Deriving semantic knowledge from descriptive texts using an MT system
Eric Nyberg | Teruko Mitamura | Kathryn Baker | David Svoboda | Brian Peterson | Jennifer Williams
Proceedings of the 5th Conference of the Association for Machine Translation in the Americas: Technical Papers

This paper describes the results of a feasibility study which focused on deriving semantic networks from descriptive texts using controlled language. The KANT system [3,6] was used to analyze input paragraphs, producing sentence-level interlingua representations. The interlinguas were merged to construct a paragraph-level representation, which was used to create a semantic network in Conceptual Graph (CG) [1] format. The interlinguas were also translated (using the KANTOO generator) into OWL statements for entry into the Ontology Works electrical power factbase [9]. The system was extended to allow simple querying in natural language.
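
A toy sketch of the merging step described above, assuming each sentence-level representation is reduced to concept-relation-concept triples that are unioned into one paragraph-level graph. The triples, domain terms, and data structures here are invented for illustration and are not KANT's interlingua format.

```python
# Toy sketch (not KANT's actual representation): merge sentence-level
# concept-relation-concept triples into a single paragraph-level graph,
# de-duplicating repeated facts so the result forms one semantic network.
from collections import defaultdict

# Hypothetical triples extracted from two sentences of a descriptive paragraph.
sentence_graphs = [
    [("circuit-breaker", "part-of", "switchgear"),
     ("circuit-breaker", "has-state", "open")],
    [("switchgear", "located-in", "substation"),
     ("circuit-breaker", "has-state", "open")],  # fact repeated across sentences
]

def merge(graphs):
    """Union sentence-level triples into one paragraph-level adjacency map."""
    merged = defaultdict(set)
    for graph in graphs:
        for subject, relation, obj in graph:
            merged[subject].add((relation, obj))
    return merged

paragraph_graph = merge(sentence_graphs)
for concept, edges in paragraph_graph.items():
    for relation, obj in sorted(edges):
        print(f"{concept} --{relation}--> {obj}")
```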