Jason Baldridge


2024

Where Do We Go From Here? Multi-scale Allocentric Relational Inference from Natural Spatial Descriptions
Tzuf Paz-Argaman | John Palowitch | Sayali Kulkarni | Jason Baldridge | Reut Tsarfaty
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

The concept of acquired spatial knowledge is crucial in spatial cognitive research, particularly when it comes to communicating routes. However, NLP navigation studies often overlook the impact of acquired knowledge on textual descriptions. Current navigation studies concentrate on egocentric local descriptions (e.g., ‘it will be on your right’) that require reasoning over the agent’s local perception. These instructions are typically given in a sequence of steps, with each action-step explicitly mentioned and followed by a landmark that the agent can use to verify that they are on the correct path (e.g., ‘turn right and then you will see...’). In contrast, descriptions based on knowledge acquired through a map provide a complete view of the environment and capture its compositionality. These instructions typically contain allocentric relations and are non-sequential, with implicit actions and multiple spatial relations but no verification steps (e.g., ‘south of Central Park and a block north of a police station’). This paper introduces the Rendezvous (RVS) task and dataset, which includes 10,404 examples of English geospatial instructions for reaching a target location using map knowledge. Our analysis reveals that RVS exhibits a richer use of allocentric spatial relations and requires resolving more spatial relations simultaneously than previous text-based navigation benchmarks.

2022

Underspecification in Scene Description-to-Depiction Tasks
Ben Hutchinson | Jason Baldridge | Vinodkumar Prabhakaran
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Questions regarding implicitness, ambiguity and underspecification are crucial for understanding the task validity and ethical concerns of multimodal image+text systems, yet have received little attention to date. This position paper maps out a conceptual framework to address this gap, focusing on systems which generate images depicting scenes from scene descriptions. In doing so, we account for how texts and images convey meaning differently. We outline a set of core challenges concerning textual and visual ambiguity, as well as risks that may be amplified by ambiguous and underspecified elements. We propose and discuss strategies for addressing these challenges, including generating visually ambiguous images, and generating a set of diverse images.

2021

MURAL: Multimodal, Multitask Representations Across Languages
Aashi Jain | Mandy Guo | Krishna Srinivasan | Ting Chen | Sneha Kudugunta | Chao Jia | Yinfei Yang | Jason Baldridge
Findings of the Association for Computational Linguistics: EMNLP 2021

Both image-caption pairs and translation pairs provide the means to learn deep representations of and connections between languages. We use both types of pairs in MURAL (MUltimodal, MUltitask Representations Across Languages), a dual encoder that solves two tasks: 1) image-text matching and 2) translation pair matching. By incorporating billions of translation pairs, MURAL extends ALIGN (Jia et al.), a state-of-the-art dual encoder learned from 1.8 billion noisy image-text pairs. When using the same encoders, MURAL’s performance matches or exceeds ALIGN’s cross-modal retrieval performance on well-resourced languages across several datasets. More importantly, it considerably improves performance on under-resourced languages, showing that text-text learning can overcome a paucity of image-caption examples for these languages. On the Wikipedia Image-Text dataset, for example, MURAL-base improves zero-shot mean recall by 8.1% on average for eight under-resourced languages and by 6.8% on average when fine-tuning. We additionally show that MURAL’s text representations cluster not only with respect to genealogical connections but also based on areal linguistics, such as the Balkan Sprachbund.
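
As an illustration of the two-task setup described above, here is a minimal PyTorch sketch of a multitask dual-encoder training step that combines an in-batch image-text matching loss with a translation-pair matching loss. This is not the MURAL implementation; the encoder modules and batch keys are hypothetical placeholders.

```python
# Illustrative sketch only -- not the MURAL code.
# `image_encoder` and `text_encoder` are hypothetical stand-ins for trained modules.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(a, b, temperature=0.07):
    """Symmetric softmax loss over in-batch similarities; a[i] is paired with b[i]."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                      # [batch, batch] similarities
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def multitask_step(image_encoder, text_encoder, batch, translation_weight=1.0):
    """One step combining image-text matching and translation-pair matching."""
    img_emb = image_encoder(batch["images"])
    cap_emb = text_encoder(batch["captions"])             # captions of those images
    src_emb = text_encoder(batch["source_texts"])         # translation pair, side A
    tgt_emb = text_encoder(batch["target_texts"])         # translation pair, side B
    loss_image_text = in_batch_contrastive_loss(img_emb, cap_emb)
    loss_translation = in_batch_contrastive_loss(src_emb, tgt_emb)
    return loss_image_text + translation_weight * loss_translation
```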

PanGEA: The Panoramic Graph Environment Annotation Toolkit
Alexander Ku | Peter Anderson | Jordi Pont Tuset | Jason Baldridge
Proceedings of the Second Workshop on Advances in Language and Vision Research

PanGEA, the Panoramic Graph Environment Annotation toolkit, is a lightweight toolkit for collecting speech and text annotations in photo-realistic 3D environments. PanGEA immerses annotators in a web-based simulation and allows them to move around easily as they speak and/or listen. It includes database and cloud storage integration, plus utilities for automatically aligning recorded speech with manual transcriptions and the virtual pose of the annotators. Out of the box, PanGEA supports two tasks – collecting navigation instructions and navigation instruction following – and it could be easily adapted for annotating walking tours, finding and labeling landmarks or objects, and similar tasks. We share best practices learned from using PanGEA in a 20,000 hour annotation effort to collect the Room-Across-Room dataset. We hope that our open-source annotation toolkit and insights will both expedite future data collection efforts and spur innovation on the kinds of grounded language tasks such environments can support.

Multi-Level Gazetteer-Free Geocoding
Sayali Kulkarni | Shailee Jain | Mohammad Javad Hosseini | Jason Baldridge | Eugene Ie | Li Zhang
Proceedings of Second International Combined Workshop on Spatial Language Understanding and Grounded Communication for Robotics

We present a multi-level geocoding model (MLG) that learns to associate texts with geographic coordinates. The Earth’s surface is represented using space-filling curves that decompose the sphere into a hierarchical grid. MLG balances classification granularity and accuracy by combining losses across multiple levels and predicting cells at multiple levels jointly. It obtains large gains without any gazetteer metadata, demonstrating that it can effectively learn the connection between text spans and coordinates and can thus serve as a gazetteer-free geocoder. Furthermore, MLG obtains state-of-the-art results for toponym resolution on three English datasets without any dataset-specific tuning.
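
To make the multi-level idea concrete, the sketch below sums weighted cross-entropy losses over classification heads at several grid levels, from coarse to fine. It assumes an S2-style hierarchical cell grid and a generic text encoder; the class and function names are hypothetical, not the MLG code.

```python
# Illustrative sketch of multi-level cell classification -- not the MLG implementation.
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelGeocoder(nn.Module):
    """Text encoder followed by one classification head per grid level (hypothetical)."""
    def __init__(self, text_encoder, hidden_dim, cells_per_level):
        super().__init__()
        self.text_encoder = text_encoder
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, n_cells) for n_cells in cells_per_level]
        )

    def forward(self, token_ids):
        h = self.text_encoder(token_ids)            # [batch, hidden_dim]
        return [head(h) for head in self.heads]     # one set of cell logits per level

def multi_level_loss(level_logits, level_labels, level_weights):
    """Combine cross-entropy losses across grid levels (coarse to fine)."""
    return sum(w * F.cross_entropy(logits, labels)
               for logits, labels, w in zip(level_logits, level_labels, level_weights))
```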

On the Evaluation of Vision-and-Language Navigation Instructions
Ming Zhao | Peter Anderson | Vihan Jain | Su Wang | Alexander Ku | Jason Baldridge | Eugene Ie
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Vision-and-Language Navigation wayfinding agents can be enhanced by exploiting automatically generated navigation instructions. However, existing instruction generators have not been comprehensively evaluated, and the automatic evaluation metrics used to develop them have not been validated. Using human wayfinders, we show that these generators perform on par with or only slightly better than a template-based generator and far worse than human instructors. Furthermore, we discover that BLEU, ROUGE, METEOR and CIDEr are ineffective for evaluating grounded navigation instructions. To improve instruction evaluation, we propose an instruction-trajectory compatibility model that operates without reference instructions. Our model shows the highest correlation with human wayfinding outcomes when scoring individual instructions. For ranking instruction generation systems, if reference instructions are available we recommend using SPICE.

Crisscrossed Captions: Extended Intramodal and Intermodal Semantic Similarity Judgments for MS-COCO
Zarana Parekh | Jason Baldridge | Daniel Cer | Austin Waters | Yinfei Yang
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

By supporting multi-modal retrieval training and evaluation, image captioning datasets have spurred remarkable progress on representation learning. Unfortunately, datasets have limited cross-modal associations: images are not paired with other images, captions are only paired with other captions of the same image, there are no negative associations and there are missing positive cross-modal associations. This undermines research into how inter-modality learning impacts intra-modality tasks. We address this gap with Crisscrossed Captions (CxC), an extension of the MS-COCO dataset with human semantic similarity judgments for 267,095 intra- and inter-modality pairs. We report baseline results on CxC for strong existing unimodal and multimodal models. We also evaluate a multitask dual encoder trained on both image-caption and caption-caption pairs that crucially demonstrates CxC’s value for measuring the influence of intra- and inter-modality learning.

2020

Proceedings of the Third International Workshop on Spatial Language Understanding
Parisa Kordjamshidi | Archna Bhatia | Malihe Alikhani | Jason Baldridge | Mohit Bansal | Marie-Francine Moens
Proceedings of the Third International Workshop on Spatial Language Understanding

Retouchdown: Releasing Touchdown on StreetLearn as a Public Resource for Language Grounding Tasks in Street View
Harsh Mehta | Yoav Artzi | Jason Baldridge | Eugene Ie | Piotr Mirowski
Proceedings of the Third International Workshop on Spatial Language Understanding

The Touchdown dataset (Chen et al., 2019) provides instructions by human annotators for navigation through New York City streets and for resolving spatial descriptions at a given location. To enable the wider research community to work effectively with the Touchdown tasks, we are publicly releasing the 29k raw Street View panoramas needed for Touchdown. We follow the process used for the StreetLearn data release (Mirowski et al., 2019) to check panoramas for personally identifiable information and blur them as necessary. These have been added to the StreetLearn dataset and can be obtained via the same process as used previously for StreetLearn. We also provide a reference implementation for both Touchdown tasks: vision and language navigation (VLN) and spatial description resolution (SDR). We compare our model results to those given in (Chen et al., 2019) and show that the panoramas we have added to StreetLearn support both Touchdown tasks and can be used effectively for further research and comparison.

Proceedings of the First Workshop on Advances in Language and Vision Research
Xin Wang | Jesse Thomason | Ronghang Hu | Xinlei Chen | Peter Anderson | Qi Wu | Asli Celikyilmaz | Jason Baldridge | William Yang Wang
Proceedings of the First Workshop on Advances in Language and Vision Research

Room-Across-Room: Multilingual Vision-and-Language Navigation with Dense Spatiotemporal Grounding
Alexander Ku | Peter Anderson | Roma Patel | Eugene Ie | Jason Baldridge
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

We introduce Room-Across-Room (RxR), a new Vision-and-Language Navigation (VLN) dataset. RxR is multilingual (English, Hindi, and Telugu) and larger (more paths and instructions) than other VLN datasets. It emphasizes the role of language in VLN by addressing known biases in paths and eliciting more references to visible entities. Furthermore, each word in an instruction is time-aligned to the virtual poses of instruction creators and validators. We establish baseline scores for monolingual and multilingual settings and multitask learning when including Room-to-Room annotations (Anderson et al., 2018). We also provide results for a model that learns from synchronized pose traces by focusing only on portions of the panorama attended to in human demonstrations. The size, scope and detail of RxR dramatically expand the frontier for research on embodied language agents in photorealistic simulated environments.

Mapping Natural Language Instructions to Mobile UI Action Sequences
Yang Li | Jiacong He | Xin Zhou | Yuan Zhang | Jason Baldridge
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

We present a new problem: grounding natural language instructions to mobile user interface actions, and create three new datasets for it. For full task evaluation, we create PixelHelp, a corpus that pairs English instructions with actions performed by people on a mobile UI emulator. To scale training, we decouple the language and action data by (a) annotating action phrase spans in How-To instructions and (b) synthesizing grounded descriptions of actions for mobile user interfaces. We use a Transformer to extract action phrase tuples from long-range natural language instructions. A grounding Transformer then contextually represents UI objects using both their content and screen position and connects them to object descriptions. Given a starting screen and instruction, our model achieves 70.59% accuracy on predicting complete ground-truth action sequences in PixelHelp.

2019

Multi-modal Discriminative Model for Vision-and-Language Navigation
Haoshuo Huang | Vihan Jain | Harsh Mehta | Jason Baldridge | Eugene Ie
Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP)

Vision-and-Language Navigation (VLN) is a natural language grounding task where agents have to interpret natural language instructions in the context of visual scenes in a dynamic environment to achieve prescribed navigation goals. Successful agents must be able to parse natural language instructions of varying linguistic styles, ground them in potentially unfamiliar scenes, and plan and react to ambiguous environmental feedback. Generalization ability is limited by the amount of human-annotated data; in particular, paired vision-language sequence data is expensive to collect. We develop a discriminator that uses multi-modal alignment to evaluate how well an instruction explains a given path in the VLN task. Our study reveals that only a small fraction of the high-quality augmented data from Fried et al., as scored by our discriminator, is useful for training VLN agents that reach similar performance. We also show that a VLN agent warm-started with pre-trained components from the discriminator outperforms the benchmark success rate of 35.5 by 10% in relative terms.

PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification
Yinfei Yang | Yuan Zhang | Chris Tar | Jason Baldridge
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Most existing work on adversarial data generation focuses on English. For example, PAWS (Paraphrase Adversaries from Word Scrambling) consists of challenging English paraphrase identification pairs from Wikipedia and Quora. We remedy this gap with PAWS-X, a new dataset of 23,659 human-translated PAWS evaluation pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean. We provide baseline numbers for three models with different capacities to capture non-local context and sentence structure, under different multilingual training and evaluation regimes. Multilingual BERT fine-tuned on PAWS English plus machine-translated data performs the best, with accuracy ranging from 83.1 to 90.8 across the non-English languages and an average accuracy gain of 23% over the next best model. PAWS-X shows the effectiveness of deep, multilingual pre-training while also leaving considerable headroom as a new challenge to drive multilingual research that better captures structure and contextual information.

Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation
Vihan Jain | Gabriel Magalhaes | Alexander Ku | Ashish Vaswani | Eugene Ie | Jason Baldridge
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Advances in learning and representations have reinvigorated work that connects language to other modalities. A particularly exciting direction is Vision-and-Language Navigation (VLN), in which agents interpret natural language instructions and visual scenes to move through environments and reach goals. Despite recent progress, current research leaves unclear how much of a role language understanding plays in this task, especially because dominant evaluation metrics have focused on goal completion rather than the sequence of actions corresponding to the instructions. Here, we highlight shortcomings of current metrics for the Room-to-Room dataset (Anderson et al., 2018b) and propose a new metric, Coverage weighted by Length Score (CLS). We also show that the existing paths in the dataset are not ideal for evaluating instruction following because they are direct-to-goal shortest paths. We join existing short paths to form more challenging extended paths to create a new dataset, Room-for-Room (R4R). Using R4R and CLS, we show that agents that receive rewards for instruction fidelity outperform agents that focus on goal completion.

PAWS: Paraphrase Adversaries from Word Scrambling
Yuan Zhang | Jason Baldridge | Luheng He
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like flights from New York to Florida and flights from Florida to New York. This paper introduces PAWS (Paraphrase Adversaries from Word Scrambling), a new dataset with 108,463 well-formed paraphrase and non-paraphrase pairs with high lexical overlap. Challenging pairs are generated by controlled word swapping and back translation, followed by fluency and paraphrase judgments by human raters. State-of-the-art models trained on existing datasets have dismal performance on PAWS (<40% accuracy); however, including PAWS training data for these models improves their accuracy to 85% while maintaining performance on existing tasks. In contrast, models that do not capture non-local contextual information fail even with PAWS training examples. As such, PAWS provides an effective instrument for driving further progress on models that better exploit structure, context, and pairwise comparisons.
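
As a loose illustration of the word-swapping half of this pipeline only (PAWS additionally constrains swaps with a language model and applies back translation and human judgments), a toy generator of high-overlap, meaning-changing candidates might look like this; the function name is hypothetical.

```python
# Toy word-swap generator -- a loose illustration, not the controlled PAWS procedure.
import random

def swap_two_tokens(sentence, rng=random):
    """Swap two randomly chosen tokens to produce a high-lexical-overlap candidate."""
    tokens = sentence.split()
    if len(tokens) < 2:
        return sentence
    i, j = rng.sample(range(len(tokens)), 2)
    tokens[i], tokens[j] = tokens[j], tokens[i]
    return " ".join(tokens)

print(swap_two_tokens("flights from New York to Florida"))
# A possible output: "flights from Florida to New York" -- same words, different meaning.
```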

Text Classification with Few Examples using Controlled Generalization
Abhijit Mahabal | Jason Baldridge | Burcu Karagol Ayan | Vincent Perot | Dan Roth
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Training data for text classification is often limited in practice, especially for applications with many output classes or involving many related classification problems. This means classifiers must generalize from limited evidence, but the manner and extent of generalization is task dependent. Current practice primarily relies on pre-trained word embeddings to map words unseen in training to similar seen ones. Unfortunately, this squishes many components of meaning into highly restricted capacity. Our alternative begins with sparse pre-trained representations derived from unlabeled parsed corpora; based on the available training data, we select features that offer the relevant generalizations. This produces task-specific semantic vectors; here, we show that a feed-forward network over these vectors is especially effective in low-data scenarios, compared to existing state-of-the-art methods. By further pairing this network with a convolutional neural network, we keep this edge in low-data scenarios and remain competitive when using full training sets.

Large-Scale Representation Learning from Visually Grounded Untranscribed Speech
Gabriel Ilharco | Yuan Zhang | Jason Baldridge
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)

Systems that can associate images with their spoken audio captions are an important step towards visually grounded language learning. We describe a scalable method to automatically generate diverse audio for image captioning datasets. This supports pretraining deep networks for encoding both audio and images, which we do via a dual encoder that learns to align latent representations from both modalities. We show that a masked margin softmax loss for such models is superior to the standard triplet loss. We fine-tune these models on the Flickr8k Audio Captions Corpus and obtain state-of-the-art results—improving recall in the top 10 from 29.6% to 49.5%. We also obtain human ratings on retrieval outputs to better assess the impact of incidentally matching image-caption pairs that were not associated in the data, finding that automatic evaluation substantially underestimates the quality of the retrieved results.
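
The sketch below is a simplified margin softmax over in-batch image/audio similarities, in the spirit of the masked margin softmax mentioned above; the paper's masking of incidental positives and its margin schedule are omitted, and the function name is hypothetical.

```python
# Simplified margin softmax over paired image/audio embeddings -- not the paper's
# exact masked margin softmax (duplicate masking and margin scheduling omitted).
import torch
import torch.nn.functional as F

def margin_softmax_loss(image_emb, audio_emb, margin=0.001):
    image_emb = F.normalize(image_emb, dim=-1)
    audio_emb = F.normalize(audio_emb, dim=-1)
    sims = image_emb @ audio_emb.t()                       # [batch, batch] similarities
    # Subtract a margin from the matched (diagonal) entries so positives must beat
    # in-batch negatives by at least that margin.
    sims = sims - margin * torch.eye(sims.size(0), device=sims.device)
    targets = torch.arange(sims.size(0), device=sims.device)
    return 0.5 * (F.cross_entropy(sims, targets) +
                  F.cross_entropy(sims.t(), targets))
```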

Learning Dense Representations for Entity Retrieval
Daniel Gillick | Sayali Kulkarni | Larry Lansing | Alessandro Presta | Jason Baldridge | Eugene Ie | Diego Garcia-Olano
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)

We show that it is feasible to perform entity linking by training a dual encoder (two-tower) model that encodes mentions and entities in the same dense vector space, where candidate entities are retrieved by approximate nearest neighbor search. Unlike prior work, this setup does not rely on an alias table followed by a re-ranker, and is thus the first fully learned entity retrieval model. We show that our dual encoder, trained using only anchor-text links in Wikipedia, outperforms discrete alias table and BM25 baselines, and is competitive with the best comparable results on the standard TACKBP-2010 dataset. In addition, it can retrieve candidates extremely fast, and generalizes well to a new dataset derived from Wikinews. On the modeling side, we demonstrate the dramatic value of an unsupervised negative mining algorithm for this task.
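
To illustrate the retrieval side of such a dual encoder, here is a schematic sketch in which all candidate entities are pre-encoded once and a mention is matched against them by dot product; brute-force NumPy search stands in for an approximate nearest neighbor index, and the encoder arguments are hypothetical.

```python
# Schematic dual-encoder retrieval step -- illustration only, not the paper's system.
# Brute-force dot-product search stands in for an approximate nearest neighbor index.
import numpy as np

def build_entity_index(entity_encoder, entity_texts):
    """Pre-encode every candidate entity into one dense matrix of shape [n, dim]."""
    return np.stack([entity_encoder(text) for text in entity_texts])

def retrieve(mention_encoder, entity_matrix, mention_text, top_k=10):
    """Encode a mention and return the indices of the top-scoring entities."""
    query = mention_encoder(mention_text)                 # [dim]
    scores = entity_matrix @ query                        # dot-product scores
    return np.argsort(-scores)[:top_k]
```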

2018

Points, Paths, and Playscapes: Large-scale Spatial Language Understanding Tasks Set in the Real World
Jason Baldridge | Tania Bedrax-Weiss | Daphne Luong | Srini Narayanan | Bo Pang | Fernando Pereira | Radu Soricut | Michael Tseng | Yuan Zhang
Proceedings of the First International Workshop on Spatial Language Understanding

Spatial language understanding is important for practical applications and as a building block for better abstract language understanding. Much progress has been made through work on understanding spatial relations and values in images and texts as well as on giving and following navigation instructions in restricted domains. We argue that the next big advances in spatial language understanding can be best supported by creating large-scale datasets that focus on points and paths based in the real world, and then extending these to create online, persistent playscapes that mix human and bot players, where the bot players must learn, evolve, and survive according to their depth of understanding of scenes, navigation, and interactions.

A Fast, Compact, Accurate Model for Language Identification of Codemixed Text
Yuan Zhang | Jason Riesa | Daniel Gillick | Anton Bakalov | Jason Baldridge | David Weiss
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We address fine-grained multilingual language identification: providing a language code for every token in a sentence, including codemixed text containing multiple languages. Such text is prevalent online, in documents, social media, and message boards. We show that a feed-forward network with a simple globally constrained decoder can accurately and rapidly label both codemixed and monolingual text in 100 languages and 100 language pairs. This model outperforms previously published multilingual approaches in terms of both accuracy and speed, yielding an 800x speed-up and a 19.5% averaged absolute gain on three codemixed datasets. It furthermore outperforms several benchmark systems on monolingual language identification.

Learning To Split and Rephrase From Wikipedia Edit History
Jan A. Botha | Manaal Faruqui | John Alex | Jason Baldridge | Dipanjan Das
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Split and rephrase is the task of breaking down a sentence into shorter ones that together convey the same meaning. We extract a rich new dataset for this task by mining Wikipedia’s edit history: WikiSplit contains one million naturally occurring sentence rewrites, providing sixty times more distinct split examples and a ninety times larger vocabulary than the WebSplit corpus introduced by Narayan et al. (2017) as a benchmark for this task. Incorporating WikiSplit as training data produces a model with qualitatively better predictions that score 32 BLEU points above the prior best result on the WebSplit benchmark.

Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
Kellie Webster | Marta Recasens | Vera Axelrod | Jason Baldridge
Transactions of the Association for Computational Linguistics, Volume 6

Coreference resolution is an important task for natural language understanding, and the resolution of ambiguous pronouns a longstanding challenge. Nonetheless, existing corpora do not capture ambiguous pronouns in sufficient volume or diversity to accurately indicate the practical utility of models. Furthermore, we find gender bias in existing corpora and systems favoring masculine entities. To address this, we present and release GAP, a gender-balanced labeled corpus of 8,908 ambiguous pronoun–name pairs sampled to provide diverse coverage of challenges posed by real-world text. We explore a range of baselines that demonstrate the complexity of the challenge, the best achieving just 66.9% F1. We show that syntactic structure and continuous neural models provide promising, complementary cues for approaching the challenge.

2016

Creating a Novel Geolocation Corpus from Historical Texts
Grant DeLozier | Ben Wing | Jason Baldridge | Scott Nesbit
Proceedings of the 10th Linguistic Annotation Workshop held in conjunction with ACL 2016 (LAW-X 2016)

2015

A Supertag-Context Model for Weakly-Supervised CCG Parser Learning
Dan Garrette | Chris Dyer | Jason Baldridge | Noah A. Smith
Proceedings of the Nineteenth Conference on Computational Natural Language Learning

Parse Imputation for Dependency Annotations
Jason Mielens | Liang Sun | Jason Baldridge
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

Weakly-Supervised Bayesian Learning of a CCG Supertagger
Dan Garrette | Chris Dyer | Jason Baldridge | Noah A. Smith
Proceedings of the Eighteenth Conference on Computational Natural Language Learning

Parsing low-resource languages using Gibbs sampling for PCFGs with latent annotations
Liang Sun | Jason Mielens | Jason Baldridge
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Hierarchical Discriminative Classification for Text-Based Geolocation
Benjamin Wing | Jason Baldridge
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

Learning a Part-of-Speech Tagger from Two Hours of Annotation
Dan Garrette | Jason Baldridge
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Real-World Semi-Supervised Learning of POS-Taggers for Low-Resource Languages
Dan Garrette | Jason Mielens | Jason Baldridge
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Text-Driven Toponym Resolution using Indirect Supervision
Michael Speriosu | Jason Baldridge
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

A Framework for (Under)specifying Dependency Syntax without Overloading Annotators
Nathan Schneider | Brendan O’Connor | Naomi Saphra | David Bamman | Manaal Faruqui | Noah A. Smith | Chris Dyer | Jason Baldridge
Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse

2012

Type-Supervised Hidden Markov Models for Part-of-Speech Tagging with Incomplete Tag Dictionaries
Dan Garrette | Jason Baldridge
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

Supervised Text-based Geolocation Using Language Models on an Adaptive Grid
Stephen Roller | Michael Speriosu | Sarat Rallapalli | Benjamin Wing | Jason Baldridge
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

Twitter Polarity Classification with Label Propagation over Lexical Links and the Follower Graph
Michael Speriosu | Nikita Sudan | Sid Upadhyay | Jason Baldridge
Proceedings of the First workshop on Unsupervised Learning in NLP

Simple supervised document geolocation with geodesic grids
Benjamin Wing | Jason Baldridge
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Simple Unsupervised Grammar Induction from Raw Text with Cascaded Finite State Models
Elias Ponvert | Jason Baldridge | Katrin Erk
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Semantic Role Labeling Without Treebanks?
Stephen Boxwell | Chris Brew | Jason Baldridge | Dennis Mehay | Sujith Ravi
Proceedings of 5th International Joint Conference on Natural Language Processing

2010

Crouching Dirichlet, Hidden Markov Model: Unsupervised POS Tagging with Context Local Tag Generation
Taesun Moon | Katrin Erk | Jason Baldridge
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Minimized Models and Grammar-Informed Initialization for Supertagging with Highly Ambiguous Lexicons
Sujith Ravi | Jason Baldridge | Kevin Knight
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

2009

Supertagging with Factorial Hidden Markov Models
Srivatsan Ramanujam | Jason Baldridge
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 2

Evaluating Automation Strategies in Language Documentation
Alexis Palmer | Taesun Moon | Jason Baldridge
Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing

How well does active learning actually work? Time-based evaluation of cost-reduction strategies for language documentation.
Jason Baldridge | Alexis Palmer
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

Unsupervised morphological segmentation and clustering with document boundaries
Taesun Moon | Katrin Erk | Jason Baldridge
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

2008

Specialized Models and Ranking for Coreference Resolution
Pascal Denis | Jason Baldridge
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

Teaching Computational Linguistics to a Large, Diverse Student Body: Courses, Tools, and Interdepartmental Interaction
Jason Baldridge | Katrin Erk
Proceedings of the Third Workshop on Issues in Teaching Computational Linguistics

Multidisciplinary Instruction with the Natural Language Toolkit
Steven Bird | Ewan Klein | Edward Loper | Jason Baldridge
Proceedings of the Third Workshop on Issues in Teaching Computational Linguistics

A Logical Basis for the D Combinator and Normal Form in CCG
Frederick Hoyt | Jason Baldridge
Proceedings of ACL-08: HLT

Weakly Supervised Supertagging with Grammar-Informed Initialization
Jason Baldridge
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2007

Part-of-Speech Tagging for Middle English through Alignment and Projection of Parallel Diachronic Texts
Taesun Moon | Jason Baldridge
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

Joint Determination of Anaphoricity and Coreference Resolution using Integer Programming
Pascal Denis | Jason Baldridge
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference

A Sequencing Model for Situation Entity Classification
Alexis Palmer | Elias Ponvert | Jason Baldridge | Carlota Smith
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

2005

Probabilistic Head-Driven Parsing for Discourse Structure
Jason Baldridge | Alex Lascarides
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)

2004

Generalizing Dimensionality in Combinatory Categorial Grammar
Geert-Jan M. Kruijff | Jason Baldridge
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

Active Learning and the Total Cost of Annotation
Jason Baldridge | Miles Osborne
Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing

Ensemble-based Active Learning for Parse Selection
Miles Osborne | Jason Baldridge
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

2003

Active learning for HPSG parse selection
Jason Baldridge | Miles Osborne
Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003

Adapting Chart Realization to CCG
Michael White | Jason Baldridge
Proceedings of the 9th European Workshop on Natural Language Generation (ENLG-2003) at EACL 2003

Multi-Modal Combinatory Categorial Grammar
Jason Baldridge | Geert-Jan M. Kruijff
10th Conference of the European Chapter of the Association for Computational Linguistics

2002

Coupling CCG and Hybrid Logic Dependency Semantics
Jason Baldridge | Geert-Jan Kruijff
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics

Leo: an Architecture for Sharing Resources for Unification-Based Grammars
Jason Baldridge | John Dowding | Susana Early
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

1998

Description of the UPENN CAMP System as Used for Coreference
Breck Baldwin | Tom Morton | Amit Bagga | Jason Baldridge | Raman Chandraseker | Alexis Dimitriadis | Kieran Snyder | Magdalena Wolska
Seventh Message Understanding Conference (MUC-7): Proceedings of a Conference Held in Fairfax, Virginia, April 29 - May 1, 1998
