Rebecca J. Passonneau

Also published as: Rebecca Passonneau


2021

ABCD: A Graph Framework to Convert Complex Sentences to a Covering Set of Simple Sentences
Yanjun Gao | Ting-Hao Huang | Rebecca J. Passonneau
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Atomic clauses are fundamental text units for understanding complex sentences. Identifying the atomic sentences within complex sentences is important for applications such as summarization, argument mining, discourse analysis, discourse parsing, and question answering. Previous work mainly relies on rule-based methods dependent on parsing. We propose a new task to decompose each complex sentence into simple sentences derived from the tensed clauses in the source, and a novel problem formulation as a graph edit task. Our neural model learns to Accept, Break, Copy or Drop elements of a graph that combines word adjacency and grammatical dependencies. The full processing pipeline includes modules for graph construction, graph editing, and sentence generation from the output graph. We introduce DeSSE, a new dataset designed to train and evaluate complex sentence decomposition, and MinWiki, a subset of MinWikiSplit. ABCD achieves performance comparable to two parsing baselines on MinWiki. On DeSSE, which has a more even balance of complex sentence types, our model achieves higher accuracy on the number of atomic sentences than an encoder-decoder baseline. Results include a detailed error analysis.

Learning Clause Representation from Dependency-Anchor Graph for Connective Prediction
Yanjun Gao | Ting-Hao Huang | Rebecca J. Passonneau
Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15)

Semantic representation that supports the choice of an appropriate connective between pairs of clauses inherently addresses discourse coherence, which is important for tasks such as narrative understanding, argumentation, and discourse parsing. We propose a novel clause embedding method that applies graph learning to a data structure we refer to as a dependency-anchor graph. The dependency-anchor graph incorporates two kinds of syntactic information, constituency structure and dependency relations, to highlight the subject and verb phrase relation. This enhances coherence-related aspects of representation. We design a neural model to learn a semantic representation for clauses from graph convolution over latent representations of the subject and verb phrase. We evaluate our method on two new datasets: a subset of a large corpus where the source texts are published novels, and a new dataset collected from students' essays. The results demonstrate a significant improvement over tree-based models, confirming the importance of emphasizing the subject and verb phrase. The performance gap between the two datasets illustrates the challenges of analyzing students' written text, and points to both a potential evaluation task for coherence modeling and an application for suggesting revisions to students.

2020

Dialogue Policies for Learning Board Games through Multimodal Communication
Maryam Zare | Ali Ayub | Aishan Liu | Sweekar Sudhakara | Alan Wagner | Rebecca Passonneau
Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue

This paper presents MDP policy learning for agents to learn strategic behavior (how to play board games) during multimodal dialogues. Policies are trained offline in simulation, with dialogues carried out in a formal language. The agent has a temporary belief state for the dialogue, and a persistent knowledge store represented as an extensive-form game tree. How well the agent learns a new game from a dialogue with a simulated partner is evaluated by how well it plays the game, given its dialogue-final knowledge state. During policy training, we control for the simulated dialogue partner's level of informativeness in responding to questions. The agent learns best when its trained policy matches the current dialogue partner's informativeness. We also present a novel data collection for training natural language modules. Human subjects who engaged in dialogues with a baseline system rated the system's language skills as above average. Further, results confirm that human dialogue partners also vary in their informativeness.

2019

Automated Pyramid Summarization Evaluation
Yanjun Gao | Chen Sun | Rebecca J. Passonneau
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)

Pyramid evaluation was developed to assess the content of paragraph-length summaries of source texts. A pyramid lists the distinct units of content found in several reference summaries, weights content units by how many reference summaries they occur in, and produces three scores based on the weighted content of new summaries. We present an automated method that is more efficient, more transparent, and more complete than previous automated pyramid methods. It is tested on a new dataset of student summaries, and historical NIST data from extractive summarizers.
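The weighting scheme the abstract describes can be sketched in a few lines. This is an illustrative reconstruction of the original pyramid ratio (observed SCU weight over the best achievable weight for the same number of SCUs), not PyrEval's actual code; the SCU labels and function names are assumptions:

```python
from collections import Counter

def build_pyramid(reference_summaries):
    """Weight each content unit (SCU) by how many reference summaries it occurs in."""
    weights = Counter()
    for scus in reference_summaries:
        for scu in set(scus):  # count each SCU at most once per reference
            weights[scu] += 1
    return weights

def pyramid_score(peer_scus, weights):
    """Observed weight of the peer's SCUs, normalized by the maximum weight
    achievable by any summary expressing the same number of SCUs."""
    peer = set(peer_scus)
    observed = sum(weights.get(scu, 0) for scu in peer)
    ideal = sum(sorted(weights.values(), reverse=True)[:len(peer)])
    return observed / ideal if ideal else 0.0
```

With three references [["a","b"], ["a","c"], ["a","b"]], the pyramid weights "a" at 3, "b" at 2, and "c" at 1; a peer expressing {"a","c"} scores 4/5 = 0.8.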

Rubric Reliability and Annotation of Content and Argument in Source-Based Argument Essays
Yanjun Gao | Alex Driban | Brennan Xavier McManus | Elena Musi | Patricia Davies | Smaranda Muresan | Rebecca J. Passonneau
Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications

We present a unique dataset of student source-based argument essays to facilitate research on the relations between content, argumentation skills, and assessment. Two classroom writing assignments were given to college students in a STEM major, accompanied by a carefully designed rubric. The paper presents a reliability study of the rubric, showing it to be highly reliable, and initial annotation of content and argumentation in the essays.

2018

Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts
Mohit Bansal | Rebecca Passonneau
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts

PyrEval: An Automated Method for Summary Content Analysis
Yanjun Gao | Andrew Warner | Rebecca Passonneau
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Automated Content Analysis: A Case Study of Computer Science Student Summaries
Yanjun Gao | Patricia M. Davies | Rebecca J. Passonneau
Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications

Technology is transforming higher education learning and teaching. This paper reports on a project to examine how and why automated content analysis could be used to assess précis writing by university students. We examine the case of one hundred and twenty-two summaries written by computer science freshmen. The texts, which had been hand-scored using a teacher-designed rubric, were autoscored using the Natural Language Processing software PyrEval. Pearson's correlation coefficient and Spearman rank correlation were used to analyze the relationship between the teacher score and the PyrEval score for each summary. Three content models automatically constructed by PyrEval from different sets of human reference summaries led to consistent correlations, showing that the approach is reliable. We also observed that, in cases where the focus of student assessment centers on formative feedback, categorizing the PyrEval scores by examining the average and standard deviations could lead to novel interpretations of their relationships. We suggest that this project has implications for the ways in which automated content analysis could be used to help university students improve their summarization skills.

2015

Estimation of Discourse Segmentation Labels from Crowd Data
Ziheng Huang | Jialu Zhong | Rebecca J. Passonneau
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Abstractive Multi-Document Summarization via Phrase Selection and Merging
Lidong Bing | Piji Li | Yi Liao | Wai Lam | Weiwei Guo | Rebecca Passonneau
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

The Benefits of a Model of Annotation
Rebecca J. Passonneau | Bob Carpenter
Transactions of the Association for Computational Linguistics, Volume 2

Standard agreement measures for interannotator reliability are neither necessary nor sufficient to ensure a high quality corpus. In a case study of word sense annotation, conventional methods for evaluating labels from trained annotators are contrasted with a probabilistic annotation model applied to crowdsourced data. The annotation model provides far more information, including a certainty measure for each gold standard label; the crowdsourced data was collected at less than half the cost of the conventional approach.

Aspectual Properties of Conversational Activities
Rebecca J. Passonneau | Boxuan Guan | Cho Ho Yeung | Yuan Du | Emma Conner
Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)

Biber Redux: Reconsidering Dimensions of Variation in American English
Rebecca J. Passonneau | Nancy Ide | Songqiao Su | Jesse Stuart
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

Annotating the MASC Corpus with BabelNet
Andrea Moro | Roberto Navigli | Francesco Maria Tucci | Rebecca J. Passonneau
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

In this paper we tackle the problem of automatically annotating, with both word senses and named entities, the MASC 3.0 corpus, a large English corpus covering a wide range of genres of written and spoken text. We use BabelNet 2.0, a multilingual semantic network which integrates both lexicographic and encyclopedic knowledge, as our sense/entity inventory, together with its semantic structure, to perform the aforementioned annotation task. Word sense annotated corpora have been around for more than twenty years, helping the development of Word Sense Disambiguation algorithms by providing both training and testing grounds. More recently Entity Linking has followed the same path, with the creation of huge resources containing annotated named entities. However, to date, there has been no resource that contains both kinds of annotation. In this paper we present an automatic approach for performing this annotation, together with its output on the MASC corpus. We use this corpus because its goal of integrating different types of annotations aligns exactly with our own. Our overall aim is to stimulate research on the joint exploitation and disambiguation of word senses and named entities. Finally, we estimate the quality of our annotations using both manually-tagged named entities and word senses, obtaining an accuracy of roughly 70% for both named entity and word sense annotations.

2013

Open Dialogue Management for Relational Databases
Ben Hixon | Rebecca J. Passonneau
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

The Benefits of a Model of Annotation
Rebecca J. Passonneau | Bob Carpenter
Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse

Semantic Frames to Predict Stock Price Movement
Boyi Xie | Rebecca J. Passonneau | Leon Wu | Germán G. Creamer
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Automated Pyramid Scoring of Summaries using Distributional Semantics
Rebecca J. Passonneau | Emily Chen | Weiwei Guo | Dolores Perin
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2012

Semantic Specificity in Spoken Dialogue Requests
Ben Hixon | Rebecca J. Passonneau | Susan L. Epstein
Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue

The MASC Word Sense Corpus
Rebecca J. Passonneau | Collin F. Baker | Christiane Fellbaum | Nancy Ide
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

The MASC project has produced a multi-genre corpus with multiple layers of linguistic annotation, together with a sentence corpus containing WordNet 3.1 sense tags for 1000 occurrences of each of 100 words produced by multiple annotators, accompanied by in-depth inter-annotator agreement data. Here we give an overview of the contents of MASC and then focus on the word sense sentence corpus, describing the characteristics that differentiate it from other word sense corpora and detailing the inter-annotator agreement studies that have been performed on the annotations. Finally, we discuss the potential to grow the word sense sentence corpus through crowdsourcing and the plan to enhance the content and annotations of MASC through a community-based collaborative effort.

Empirical Comparisons of MASC Word Sense Annotations
Gerard de Melo | Collin F. Baker | Nancy Ide | Rebecca J. Passonneau | Christiane Fellbaum
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

We analyze how different conceptions of lexical semantics affect sense annotations and how multiple sense inventories can be compared empirically, based on annotated text. Our study focuses on the MASC project, where data has been annotated using WordNet sense identifiers on the one hand, and FrameNet lexical units on the other. This allows us to compare the sense inventories of these lexical resources empirically rather than just theoretically, based on their glosses, leading to new insights. In particular, we compute contingency matrices and develop a novel measure, the Expected Jaccard Index, that quantifies the agreement between annotations of the same data based on two different resources even when they have different sets of categories.

2011

Sentiment Analysis of Twitter Data
Apoorv Agarwal | Boyi Xie | Ilia Vovsha | Owen Rambow | Rebecca Passonneau
Proceedings of the Workshop on Language in Social Media (LSM 2011)

Proceedings of the SIGDIAL 2011 Conference
Joyce Y. Chai | Johanna D. Moore | Rebecca J. Passonneau | David R. Traum
Proceedings of the SIGDIAL 2011 Conference

Embedded Wizardry
Rebecca J. Passonneau | Susan L. Epstein | Tiziana Ligorio | Joshua Gordon
Proceedings of the SIGDIAL 2011 Conference

Learning to Balance Grounding Rationales for Dialogue Systems
Joshua Gordon | Rebecca J. Passonneau | Susan L. Epstein
Proceedings of the SIGDIAL 2011 Conference

PARADISE-style Evaluation of a Human-Human Library Corpus
Rebecca J. Passonneau | Irene Alvarado | Phil Crone | Simon Jerome
Proceedings of the SIGDIAL 2011 Conference

2010

Learning about Voice Search for Spoken Dialogue Systems
Rebecca Passonneau | Susan L. Epstein | Tiziana Ligorio | Joshua B. Gordon | Pravin Bhutada
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Annotation Scheme for Social Network Extraction from Text
Apoorv Agarwal | Owen C. Rambow | Rebecca J. Passonneau
Proceedings of the Fourth Linguistic Annotation Workshop

Anveshan: A Framework for Analysis of Multiple Annotators’ Labeling Behavior
Vikas Bhardwaj | Rebecca Passonneau | Ansaf Salleb-Aouissi | Nancy Ide
Proceedings of the Fourth Linguistic Annotation Workshop

The Manually Annotated Sub-Corpus: A Community Resource for and by the People
Nancy Ide | Collin Baker | Christiane Fellbaum | Rebecca Passonneau
Proceedings of the ACL 2010 Conference Short Papers

Word Sense Annotation of Polysemous Words by Multiple Annotators
Rebecca J. Passonneau | Ansaf Salleb-Aouissi | Vikas Bhardwaj | Nancy Ide
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

We describe results of a word sense annotation task using WordNet, involving half a dozen well-trained annotators on ten polysemous words for three parts of speech. One hundred sentences for each word were annotated. Annotators had the same level of training and experience, but interannotator agreement (IA) varied across words. There was some effect of part of speech, with higher agreement on nouns and adjectives, but within the words for each part of speech there was wide variation. This variation in IA does not correlate with number of senses in the inventory, or the number of senses actually selected by annotators. In fact, IA was sometimes quite high for words with many senses. We claim that the IA variation is due to the word meanings, contexts of use, and individual differences among annotators. We find some correlation of IA with sense confusability as measured by a sense confusion threshold (CT). Data mining for association rules on a flattened data representation indicating each annotator's sense choices identifies outliers for some words, and systematic differences among pairs of annotators on others.

An Evaluation Framework for Natural Language Understanding in Spoken Dialogue Systems
Joshua B. Gordon | Rebecca J. Passonneau
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

We present an evaluation framework to enable developers of information-seeking, transaction-based spoken dialogue systems to compare the robustness of natural language understanding (NLU) approaches across varying levels of word error rate and contrasting domains. We develop statistical and semantic parsing based approaches to dialogue act identification and concept retrieval. Voice search is used in each approach to ultimately query the database. Included in the framework is a method for developers to bootstrap a representative pseudo-corpus, which is used to estimate NLU performance in a new domain. We illustrate the relative merits of these NLU techniques by contrasting our statistical NLU approach with a semantic parsing method over two contrasting applications, our CheckItOut library system and the deployed Let's Go Public! system, across four levels of word error rate. We find that with respect to both dialogue act identification and concept retrieval, our statistical NLU approach is more likely to robustly accommodate the freer-form, less constrained utterances of CheckItOut at higher word error rates than is possible with semantic parsing.

2009

Making Sense of Word Sense Variation
Rebecca Passonneau | Ansaf Salleb-Aouissi | Nancy Ide
Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009)

Contrasting the Interaction Structure of an Email and a Telephone Corpus: A Machine Learning Approach to Annotation of Dialogue Function Units
Jun Hu | Rebecca Passonneau | Owen Rambow
Proceedings of the SIGDIAL 2009 Conference

2008

MASC: the Manually Annotated Sub-Corpus of American English
Nancy Ide | Collin Baker | Christiane Fellbaum | Charles Fillmore | Rebecca Passonneau
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

To answer the critical need for sharable, reusable annotated resources with rich linguistic annotations, we are developing a Manually Annotated Sub-Corpus (MASC) including texts from diverse genres and manual annotations or manually-validated annotations for multiple levels, including WordNet senses and FrameNet frames and frame elements, both of which have become significant resources in the international computational linguistics community. To derive maximal benefit from the semantic information provided by these resources, the MASC will also include manually-validated shallow parses and named entities, which will enable linking WordNet senses and FrameNet frames within the same sentences into more complex semantic structures and, because named entities will often be the role fillers of FrameNet frames, enrich the semantic and pragmatic information derivable from the sub-corpus. All MASC annotations will be published with detailed inter-annotator agreement measures. The MASC and its annotations will be freely downloadable from the ANC website, thus providing maximum accessibility for researchers from around the globe.

Relation between Agreement Measures on Human Labeling and Machine Learning Performance: Results from an Art History Domain
Rebecca Passonneau | Tom Lippincott | Tae Yano | Judith Klavans
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

We discuss factors that affect human agreement on a semantic labeling task in the art history domain, based on the results of four experiments where we varied the number of labels annotators could assign, the number of annotators, the type and amount of training they received, and the size of the text span being labeled. Using the labelings from one experiment involving seven annotators, we investigate the relation between interannotator agreement and machine learning performance. We construct binary classifiers and vary the training and test data by swapping the labelings from the seven annotators. First, we find performance is often quite good despite lower than recommended interannotator agreement. Second, we find that on average, learning performance for a given functional semantic category correlates with the overall agreement among the seven annotators for that category. Third, we find that learning performance on the data from a given annotator does not correlate with the quality of that annotator’s labeling. We offer recommendations for the use of labeled data in machine learning, and argue that learners should attempt to accommodate human variation. We also note implications for large scale corpus annotation projects that deal with similarly subjective phenomena.

2007

Measuring Variability in Sentence Ordering for News Summarization
Nitin Madnani | Rebecca Passonneau | Necip Fazil Ayan | John Conroy | Bonnie Dorr | Judith Klavans | Dianne O’Leary | Judith Schlesinger
Proceedings of the Eleventh European Workshop on Natural Language Generation (ENLG 07)

2006

Inter-annotator Agreement on a Multilingual Semantic Annotation Task
Rebecca Passonneau | Nizar Habash | Owen Rambow
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Six sites participated in the Interlingual Annotation of Multilingual Text Corpora (IAMTC) project (Dorr et al., 2004; Farwell et al., 2004; Mitamura et al., 2004). Parsed versions of English translations of news articles in Arabic, French, Hindi, Japanese, Korean and Spanish were annotated by up to ten annotators. Their task was to match open-class lexical items (nouns, verbs, adjectives, adverbs) to one or more concepts taken from the Omega ontology (Philpot et al., 2003), and to identify theta roles for verb arguments. The annotated corpus is intended to be a resource for meaning-based approaches to machine translation. Here we discuss inter-annotator agreement for the corpus. The annotation task is characterized by annotators’ freedom to select multiple concepts or roles per lexical item. As a result, the annotation categories are sets, the number of which is bounded only by the number of distinct annotator-lexical item pairs. We use a reliability metric designed to handle partial agreement between sets. The best results pertain to the part of the ontology derived from WordNet. We examine change over the course of the project, differences among annotators, and differences across parts of speech. Our results suggest a strong learning effect early in the project.

Measuring Agreement on Set-valued Items (MASI) for Semantic and Pragmatic Annotation
Rebecca Passonneau
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Annotation projects dealing with complex semantic or pragmatic phenomena face the dilemma of creating annotation schemes that oversimplify the phenomena, or that capture distinctions conventional reliability metrics cannot measure adequately. The solution to the dilemma is to develop metrics that quantify the decisions that annotators are asked to make. This paper discusses MASI, a distance metric for comparing sets, and illustrates its use in quantifying the reliability of a specific dataset. Annotations of Summary Content Units (SCUs) generate models referred to as pyramids which can be used to evaluate unseen human summaries or machine summaries. The paper presents reliability results for five pairs of pyramids created for document sets from the 2003 Document Understanding Conference (DUC). The annotators worked independently of each other. Differences between application of MASI to pyramid annotation and its previous application to co-reference annotation are discussed. In addition, it is argued that a paradigmatic reliability study should relate measures of inter-annotator agreement to independent assessments, such as significance tests of the annotated variables with respect to other phenomena. In effect, what counts as sufficiently reliable inter-annotator agreement depends on the use to which the annotated data will be put.
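The MASI metric summarized above scales the Jaccard coefficient by a monotonicity weight that rewards subsumption between the two sets. A minimal sketch following the published definition (NLTK also ships an implementation, `nltk.metrics.masi_distance`); the function name here is our own:

```python
def masi_distance(a, b):
    """MASI distance between two sets: 1 minus (Jaccard similarity x monotonicity weight)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0  # two empty label sets agree completely
    jaccard = len(a & b) / len(a | b)
    if a == b:
        m = 1.0      # identical sets
    elif a <= b or b <= a:
        m = 2 / 3    # one set subsumes the other
    elif a & b:
        m = 1 / 3    # overlap, but neither subsumes the other
    else:
        m = 0.0      # disjoint sets
    return 1.0 - jaccard * m
```

For example, an annotator choosing {sense1} against another's {sense1, sense2} is penalized less (distance 2/3) than a pair choosing disjoint senses (distance 1.0).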

CLiMB ToolKit: A Case Study of Iterative Evaluation in a Multidisciplinary Project
Rebecca Passonneau | Roberta Blitz | David Elson | Angela Giral | Judith Klavans
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Digital image collections in libraries and other curatorial institutions grow too rapidly to create new descriptive metadata for subject matter search or browsing. CLiMB (Computational Linguistics for Metadata Building) was a project designed to address this dilemma that involved computer scientists, linguists, librarians, and art librarians. The CLiMB project followed an iterative evaluation model: each next phase of the project emerged from the results of an evaluation. After assembling a suite of text processing tools to be used in extracting metadata, we conducted a formative evaluation with thirteen participants, using a survey in which we varied the order and type of four conditions under which respondents would propose or select image search terms. Results of the formative evaluation led us to conclude that a CLiMB ToolKit would work best if its main function was to propose terms for users to review. After implementing a prototype ToolKit using a browser interface, we conducted an evaluation with ten experts. Users found the ToolKit very habitable, remained consistently satisfied throughout a lengthy evaluation, and selected a large number of terms per image.

2004

Evaluating Content Selection in Summarization: The Pyramid Method
Ani Nenkova | Rebecca Passonneau
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

Computing Reliability for Coreference Annotation
Rebecca J. Passonneau
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

2001

Quantitative and Qualitative Evaluation of Darpa Communicator Spoken Dialogue Systems
Marilyn A. Walker | Rebecca Passonneau | Julie E. Boland
Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics

DATE: A Dialogue Act Tagging Scheme for Evaluation of Spoken Dialogue Systems
Marilyn Walker | Rebecca Passonneau
Proceedings of the First International Conference on Human Language Technology Research

1997

Discourse Segmentation by Human and Automated Means
Rebecca J. Passonneau | Diane J. Litman
Computational Linguistics, Volume 23, Number 1, March 1997

Investigating Complementary Methods for Verb Sense Pruning
Hongyan Jing | Vasileios Hatzivassiloglou | Rebecca Passonneau | Kathleen McKeown
Tagging Text with Lexical Semantics: Why, What, and How?

Software Re-Use and Evolution in Text Generation Applications
Karen Kukich | Rebecca Passonneau | Kathleen McKeown | Dragomir Radev | Vasileios Hatzivassiloglou | Hongyan Jing
From Research to Commercial Applications: Making NLP Work in Practice

1996

Book Reviews: Representing Time in Natural Language: The Dynamic Interpretation of Tense and Aspect
Rebecca J. Passonneau
Computational Linguistics, Volume 22, Number 2, June 1996

1995

Combining Multiple Knowledge Sources for Discourse Segmentation
Diane J. Litman | Rebecca J. Passonneau
33rd Annual Meeting of the Association for Computational Linguistics

1994

Extracting Constraints on Word Usage from Large Text Corpora
Kathleen McKeown | Rebecca Passonneau
Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994

1993

Extracting Constraints on Word Usage from Large Text Corpora
Kathleen McKeown | Rebecca Passonneau
Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993

Empirical Evidence for Intention-Based Discourse Segmentation
Diane J. Litman | Rebecca J. Passonneau
Intentionality and Structure in Discourse Relations

Temporal Centering
Megumi Kameyama | Rebecca Passonneau | Massimo Poesio
31st Annual Meeting of the Association for Computational Linguistics

Intention-Based Segmentation: Human Reliability and Correlation With Linguistic Cues
Rebecca J. Passonneau | Diane J. Litman
31st Annual Meeting of the Association for Computational Linguistics

1992

Extracting Constraints on Word Usage from Large Text Corpora
Kathleen McKeown | Diane Litman | Rebecca Passonneau
Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992

1991

Some Facts About Centers, Indexicals, and Demonstratives
Rebecca J. Passonneau
29th Annual Meeting of the Association for Computational Linguistics

1989

Getting at Discourse Referents
Rebecca J. Passonneau
27th Annual Meeting of the Association for Computational Linguistics

1988

Sentence Fragments Regular Structures
Marcia C. Linebarger | Deborah A. Dahl | Lynette Hirschman | Rebecca J. Passonneau
26th Annual Meeting of the Association for Computational Linguistics

A Computational Model of the Semantics of Tense and Aspect
Rebecca J. Passonneau
Computational Linguistics, Volume 14, Number 2, June 1988

1987

Situations and Intervals
Rebecca J. Passonneau
25th Annual Meeting of the Association for Computational Linguistics

Nominalizations in PUNDIT
Deborah A. Dahl | Martha S. Palmer | Rebecca J. Passonneau
25th Annual Meeting of the Association for Computational Linguistics