Dan Jurafsky

Also published as: Daniel Jurafsky


2024

SumTablets: A Transliteration Dataset of Sumerian Tablets
Cole Simmons | Richard Diehl Martinez | Dan Jurafsky
Proceedings of the 1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)

Transliterating Sumerian is a key step in understanding Sumerian texts, but remains a difficult and time-consuming task. With more than 100,000 known texts and comparatively few specialists, manually maintaining up-to-date transliterations for the entire corpus is impractical. While many transliterations have been published online thanks to the dedicated effort of previous projects, the lack of a comprehensive, easily accessible dataset that pairs digital representations of source glyphs with their transliterations has hindered the application of natural language processing (NLP) methods to this task. To address this gap, we present SumTablets, the largest collection of Sumerian cuneiform tablets structured as Unicode glyph–transliteration pairs. Our dataset comprises 91,606 tablets (totaling 6,970,407 glyphs) with associated period and genre metadata. We release SumTablets as a Hugging Face Dataset. To construct SumTablets, we first preprocess and standardize publicly available transliterations. We then map them back to a Unicode representation of their source glyphs, retaining parallel structural information (e.g., surfaces, newlines, broken segments) through the use of special tokens. We leverage SumTablets to implement and evaluate two transliteration approaches: 1) weighted sampling from a glyph’s possible readings, and 2) fine-tuning an autoregressive language model. Our fine-tuned language model achieves an average transliteration character-level F-score (chrF) of 97.55, demonstrating the potential use of deep learning methods in Assyriological research.
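To make the first baseline concrete, here is a minimal sketch of weighted sampling from a glyph's possible readings; the glyphs and reading weights below are hypothetical placeholders, not values from SumTablets.

```python
import random

# Hypothetical reading inventories; in practice these weights would be
# estimated from reading frequencies in the SumTablets training split.
READINGS = {
    "𒀭": [("an", 0.6), ("dingir", 0.4)],
    "𒂗": [("en", 0.9), ("ensi", 0.1)],
}

def transliterate_by_sampling(glyphs):
    """Baseline: independently sample one reading per glyph,
    weighted by how often each reading is attested."""
    out = []
    for glyph in glyphs:
        readings, weights = zip(*READINGS[glyph])
        out.append(random.choices(readings, weights=weights, k=1)[0])
    return " ".join(out)

print(transliterate_by_sampling(["𒀭", "𒂗"]))  # e.g. "an en"
```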

NLP Systems That Can’t Tell Use from Mention Censor Counterspeech, but Teaching the Distinction Helps
Kristina Gligoric | Myra Cheng | Lucia Zheng | Esin Durmus | Dan Jurafsky
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

The ‘use’ of words to convey a speaker’s intent is traditionally distinguished from the ‘mention’ of words for quoting what someone said, or pointing out properties of a word. Here we show that computationally modeling this use-mention distinction is crucial for dealing with counterspeech online. Counterspeech that refutes problematic content often mentions harmful language but is not harmful itself (e.g., calling a vaccine dangerous is not the same as expressing disapproval of someone for calling vaccines dangerous). We show that even recent language models fail at distinguishing use from mention, and that this failure propagates to two key downstream tasks: misinformation and hate speech detection, resulting in censorship of counterspeech. We introduce prompting mitigations that teach the use-mention distinction, and show they reduce these errors. Our work highlights the importance of the use-mention distinction for NLP and CSS and offers ways to address it.

Grounding Gaps in Language Model Generations
Omar Shaikh | Kristina Gligoric | Ashna Khetan | Matthias Gerstgrasser | Diyi Yang | Dan Jurafsky
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Effective conversation requires common ground: a shared understanding between the participants. Common ground, however, does not emerge spontaneously in conversation. Speakers and listeners work together to both identify and construct a shared basis while avoiding misunderstanding. To accomplish grounding, humans rely on a range of dialogue acts, like clarification (What do you mean?) and acknowledgment (I understand.). However, it is unclear whether large language models (LLMs) generate text that reflects human grounding. To this end, we curate a set of grounding acts and propose corresponding metrics that quantify attempted grounding. We study whether LLM generations contain grounding acts, simulating turn-taking from several dialogue datasets and comparing results to humans. We find that—compared to humans—LLMs generate language with less conversational grounding, instead generating text that appears to simply presume common ground. To understand the roots of the identified grounding gap, we examine the role of instruction tuning and preference optimization, finding that training on contemporary preference data leads to a reduction in generated grounding acts. Altogether, we highlight the need for more research investigating conversational grounding in human-AI interaction.

AnthroScore: A Computational Linguistic Measure of Anthropomorphism
Myra Cheng | Kristina Gligoric | Tiziano Piccardi | Dan Jurafsky
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

Anthropomorphism, or the attribution of human-like characteristics to non-human entities, has shaped conversations about the impacts and possibilities of technology. We present AnthroScore, an automatic metric of implicit anthropomorphism in language. We use a masked language model to quantify how non-human entities are implicitly framed as human by the surrounding context. We show that AnthroScore corresponds with human judgments of anthropomorphism and dimensions of anthropomorphism described in social science literature. Motivated by concerns of misleading anthropomorphism in computer science discourse, we use AnthroScore to analyze 15 years of research papers and downstream news articles. In research papers, we find that anthropomorphism has steadily increased over time, and that papers related to language models have the most anthropomorphism. Within ACL papers, temporal increases in anthropomorphism are correlated with key neural advancements. Building upon concerns of scientific misinformation in mass media, we identify higher levels of anthropomorphism in news headlines compared to the research papers they cite. Since AnthroScore is lexicon-free, it can be directly applied to a wide range of text sources.
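AnthroScore's core operation, masking the entity mention and asking a masked LM whether human or non-human words fit the context, can be sketched with Hugging Face's fill-mask pipeline. The pronoun sets and the simple log-ratio score below are illustrative assumptions, not the paper's exact formulation.

```python
from math import log
from transformers import pipeline

# Sketch assuming a RoBERTa-style masked LM via Hugging Face transformers.
unmasker = pipeline("fill-mask", model="roberta-base")
HUMAN, NONHUMAN = [" he", " she"], [" it"]

def anthro_logratio(sentence, entity):
    """Mask the entity mention and compare the probability mass the
    model assigns to human vs. non-human pronouns in its place."""
    masked = sentence.replace(entity, unmasker.tokenizer.mask_token)
    preds = unmasker(masked, targets=HUMAN + NONHUMAN)
    score = {p["token_str"].strip(): p["score"] for p in preds}
    human = sum(score.get(t.strip(), 0.0) for t in HUMAN)
    nonhuman = sum(score.get(t.strip(), 0.0) for t in NONHUMAN)
    return log(human / nonhuman)  # > 0: entity framed as human-like

print(anthro_logratio("The model wants to explain its reasoning.", "The model"))
```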

Predicting positive transfer for improved low-resource speech recognition using acoustic pseudo-tokens
Nay San | Georgios Paraskevopoulos | Aryaman Arora | Xiluo He | Prabhjot Kaur | Oliver Adams | Dan Jurafsky
Proceedings of the 6th Workshop on Research in Computational Linguistic Typology and Multilingual NLP

While massively multilingual speech models like wav2vec 2.0 XLSR-128 can be directly fine-tuned for automatic speech recognition (ASR), downstream performance can still be relatively poor on languages that are under-represented in the pre-training data. Continued pre-training on 70–200 hours of untranscribed speech in these languages can help — but what about languages without that much recorded data? For such cases, we show that supplementing the target language with data from a similar, higher-resource ‘donor’ language can help. For example, continued pretraining on only 10 hours of low-resource Punjabi supplemented with 60 hours of donor Hindi is almost as good as continued pretraining on 70 hours of Punjabi. By contrast, sourcing supplemental data from less similar donors like Bengali does not improve ASR performance. To inform donor language selection, we propose a novel similarity metric based on the sequence distribution of induced acoustic units: the Acoustic Token Distribution Similarity (ATDS). Across a set of typologically different target languages (Punjabi, Galician, Iban, Setswana), we show that the ATDS between the target language and its candidate donors precisely predicts target language ASR performance.
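The exact ATDS formulation is defined over sequences of induced acoustic units; as a simplified stand-in, the sketch below compares two languages' unit n-gram distributions with cosine similarity, a design assumption rather than the paper's exact formula.

```python
from collections import Counter
from math import sqrt

def ngram_distribution(units, n=3):
    """Relative frequencies of acoustic-unit n-grams, where `units` is a
    language's sequence of induced unit IDs (e.g., quantized
    self-supervised codebook indices)."""
    grams = Counter(tuple(units[i:i + n]) for i in range(len(units) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def distribution_similarity(p, q):
    """Cosine similarity between two sparse n-gram distributions."""
    dot = sum(v * q.get(g, 0.0) for g, v in p.items())
    norm_p = sqrt(sum(v * v for v in p.values()))
    norm_q = sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

# Toy unit sequences standing in for a target language and a candidate donor.
target = [3, 7, 7, 2, 3, 7, 9, 2, 3, 7]
donor = [3, 7, 2, 3, 7, 7, 9, 2, 3, 7]
print(distribution_similarity(ngram_distribution(target), ngram_distribution(donor)))
```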

CausalGym: Benchmarking causal interpretability methods on linguistic tasks
Aryaman Arora | Dan Jurafsky | Christopher Potts
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Language models (LMs) have proven to be powerful tools for psycholinguistic research, but most prior work has focused on purely behavioural measures (e.g., surprisal comparisons). At the same time, research in model interpretability has begun to illuminate the abstract causal mechanisms shaping LM behaviour. To help bring these strands of research closer together, we introduce CausalGym. We adapt and expand the SyntaxGym suite of tasks to benchmark the ability of interpretability methods to causally affect model behaviour. To illustrate how CausalGym can be used, we study the pythia models (14M–6.9B) and assess the causal efficacy of a wide range of interpretability methods, including linear probing and distributed alignment search (DAS). We find that DAS outperforms the other methods, and so we use it to study the learning trajectory of two difficult linguistic phenomena in pythia-1b: negative polarity item licensing and filler–gap dependencies. Our analysis shows that the mechanism implementing both of these tasks is learned in discrete stages, not gradually.

string2string: A Modern Python Library for String-to-String Algorithms
Mirac Suzgun | Stuart Shieber | Dan Jurafsky
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)

We introduce **string2string**, an open-source library that offers a comprehensive suite of efficient algorithms for a broad range of string-to-string problems. It includes traditional algorithmic solutions as well as recent advanced neural approaches to tackle various problems in string alignment, distance measurement, lexical and semantic search, and similarity analysis, along with several helpful visualization tools and metrics to facilitate the interpretation and analysis of these methods. Notable algorithms featured in the library include the Smith-Waterman algorithm for pairwise local alignment, the Hirschberg algorithm for global alignment, the Wagner-Fischer algorithm for edit distance, BARTScore and BERTScore for similarity analysis, the Knuth-Morris-Pratt algorithm for lexical search, and Faiss for semantic search. In addition, it wraps existing efficient and widely-used implementations of certain frameworks and metrics, such as sacreBLEU and ROUGE. Overall, the library aims to provide extensive coverage and increased flexibility in comparison to existing libraries for strings. It can be used for many downstream applications, tasks, and problems in natural-language processing, bioinformatics, and computational social sciences. It is implemented in Python, easily installable via pip, and accessible through a simple API. Source code, documentation, and tutorials are all available on our GitHub page: https://github.com/stanfordnlp/string2string
* Documentation: https://string2string.readthedocs.io/en/latest/
* GitHub page: https://github.com/stanfordnlp/string2string
* Short video: https://drive.google.com/file/d/1IT-pBACDVUoEHewk__5Pz5mU5oAMq5k_/view?usp=sharing
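For a flavor of what the library covers, here is a standalone sketch of one of the classic algorithms it implements, the Wagner-Fischer dynamic program for edit distance (written independently here rather than through the string2string API, whose class names are not shown in this abstract).

```python
def wagner_fischer(s, t):
    """Levenshtein edit distance via the Wagner-Fischer dynamic program:
    d[i][j] is the cheapest way to turn s[:i] into t[:j]."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete everything
    for j in range(n + 1):
        d[0][j] = j  # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

print(wagner_fischer("kitten", "sitting"))  # 3
```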

2023

Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation
Martijn Bartelds | Nay San | Bradley McDonnell | Dan Jurafsky | Martijn Wieling
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The performance of automatic speech recognition (ASR) systems has advanced substantially in recent years, particularly for languages for which a large amount of transcribed speech is available. Unfortunately, for low-resource languages, such as minority languages, regional languages or dialects, ASR performance generally remains much lower. In this study, we investigate whether data augmentation techniques could help improve low-resource ASR performance, focusing on four typologically diverse minority languages or language variants (West Germanic: Gronings, West-Frisian; Malayo-Polynesian: Besemah, Nasal). For all four languages, we examine the use of self-training, where an ASR system trained with the available human-transcribed data is used to generate transcriptions, which are then combined with the original data to train a new ASR system. For Gronings, for which there was a pre-existing text-to-speech (TTS) system available, we also examined the use of TTS to generate ASR training data from text-only sources. We find that using a self-training approach consistently yields improved performance (a relative WER reduction up to 20.5% compared to using an ASR system trained on 24 minutes of manually transcribed speech). The performance gain from TTS augmentation for Gronings was even stronger (up to 25.5% relative reduction in WER compared to a system based on 24 minutes of manually transcribed speech). In sum, our results show the benefit of using self-training or (if possible) TTS-generated data as an efficient solution to overcome the limitations of data availability for resource-scarce languages in order to improve ASR performance.
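The self-training recipe described above follows a standard pseudo-labeling loop; in the sketch below, `train_asr` and `transcribe` are hypothetical stand-ins for the actual fine-tuning and decoding steps.

```python
def self_train(labeled, unlabeled, train_asr, transcribe):
    """Generic self-training loop: train on the human-transcribed data,
    pseudo-label the untranscribed audio, then retrain on the union."""
    model = train_asr(labeled)                   # seed system
    pseudo = [(audio, transcribe(model, audio))  # machine transcripts
              for audio in unlabeled]
    return train_asr(labeled + pseudo)           # combined retraining
```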

Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models
Myra Cheng | Esin Durmus | Dan Jurafsky
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

To recognize and mitigate harms from large language models (LLMs), we need to understand the prevalence and nuances of stereotypes in LLM outputs. Toward this end, we present Marked Personas, a prompt-based method to measure stereotypes in LLMs for intersectional demographic groups without any lexicon or data labeling. Grounded in the sociolinguistic concept of markedness (which characterizes explicitly linguistically marked categories versus unmarked defaults), our proposed method is twofold: 1) prompting an LLM to generate personas, i.e., natural language descriptions, of the target demographic group alongside personas of unmarked, default groups; 2) identifying the words that significantly distinguish personas of the target group from corresponding unmarked ones. We find that the portrayals generated by GPT-3.5 and GPT-4 contain higher rates of racial stereotypes than human-written portrayals using the same prompts. The words distinguishing personas of marked (non-white, non-male) groups reflect patterns of othering and exoticizing these demographics. An intersectional lens further reveals tropes that dominate portrayals of marginalized groups, such as tropicalism and the hypersexualization of minoritized women. These representational harms have concerning implications for downstream applications like story generation.
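Step 2, finding words that significantly distinguish marked personas from unmarked ones, can be sketched with the weighted log-odds-ratio with an informative Dirichlet prior (Monroe et al., 2008); treating this as the paper's exact statistic is an assumption here.

```python
from collections import Counter
from math import log, sqrt

def log_odds_ratio(target_tokens, other_tokens, prior_tokens):
    """z-scored weighted log-odds-ratio with an informative Dirichlet
    prior; |z| > ~1.96 flags words that significantly distinguish the
    target corpus from the other."""
    yt, yo, prior = Counter(target_tokens), Counter(other_tokens), Counter(prior_tokens)
    nt, no, a0 = sum(yt.values()), sum(yo.values()), sum(prior.values())
    z = {}
    for w in set(yt) | set(yo):
        aw = prior[w] + 0.01  # small floor so prior counts are never zero
        lt = log((yt[w] + aw) / (nt + a0 - yt[w] - aw))
        lo = log((yo[w] + aw) / (no + a0 - yo[w] - aw))
        z[w] = (lt - lo) / sqrt(1.0 / (yt[w] + aw) + 1.0 / (yo[w] + aw))
    return z
```

Here `target_tokens` would come from the marked-group personas, `other_tokens` from the unmarked ones, and `prior_tokens` typically from both combined.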

Multilingual BERT has an accent: Evaluating English influences on fluency in multilingual models
Isabel Papadimitriou | Kezia Lopez | Dan Jurafsky
Findings of the Association for Computational Linguistics: EACL 2023

While multilingual language models can improve NLP performance on low-resource languages by leveraging higher-resource languages, they also reduce average performance on all languages (the ‘curse of multilinguality’). Here we show another problem with multilingual models: grammatical structures in higher-resource languages bleed into lower-resource languages, a phenomenon we call grammatical structure bias. We show this bias via a novel method for comparing the fluency of multilingual models to the fluency of monolingual Spanish and Greek models: testing their preference for two carefully-chosen variable grammatical structures (optional pronoun-drop in Spanish and optional Subject-Verb ordering in Greek). We find that multilingual BERT is biased toward the English-like setting (explicit pronouns and Subject-Verb-Object ordering) as compared to our monolingual control language model. With our case studies, we hope to bring to light the fine-grained ways in which multilingual models can be biased, and encourage more linguistically-aware fluency evaluation.

Mini But Mighty: Efficient Multilingual Pretraining with Linguistically-Informed Data Selection
Tolulope Ogunremi | Dan Jurafsky | Christopher Manning
Findings of the Association for Computational Linguistics: EACL 2023

With the prominence of large pretrained language models, low-resource languages are rarely modelled monolingually and become victims of the “curse of multilinguality” in massively multilingual models. Recently, AfriBERTa showed that training transformer models from scratch on 1GB of data from many unrelated African languages outperforms massively multilingual models on downstream NLP tasks. Here we extend this direction, focusing on the use of related languages. We propose that training on smaller amounts of data but from related languages could match the performance of models trained on large, unrelated data. We test our hypothesis on the Niger-Congo family and its Bantu and Volta-Niger sub-families, pretraining models with data solely from Niger-Congo languages and finetuning on 4 downstream tasks: NER, part-of-speech tagging, sentiment analysis and text classification. We find that models trained on genetically related languages achieve equal performance on downstream tasks in low-resource languages despite using less training data. We recommend selecting training data based on language-relatedness when pretraining language models for low-resource languages.

Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding
Mirac Suzgun | Luke Melas-Kyriazi | Dan Jurafsky
Findings of the Association for Computational Linguistics: ACL 2023

In open-ended natural-language generation, existing text decoding methods typically struggle to produce text which is both diverse and high-quality. Greedy and beam search are known to suffer from text degeneration and linguistic diversity issues, while temperature, top-k, and nucleus sampling yield diverse but often lower-quality outputs. In this work, we build upon Minimum Bayes Risk Decoding (MBRD), a family of decoding methods based on Bayesian risk minimization, to address this diversity-quality trade-off. Inspired by the principle of the wisdom of the crowd, MBRD seeks to select a candidate from a pool of candidates that has the least expected risk under a generative model according to a given utility function. The crowd of candidates serves as an approximation for the distribution over human-generated references. We show that MBRD generalizes numerous decoding methods, including majority voting, and can be used as a drop-in replacement for existing sampling methods. Across a wide range of tasks—such as summarization, data-to-text, translation, and textual style transfer—MBRD yields 3-7 ROUGE and BLEU point improvements, including state-of-the-art results on WebNLG and WMT’16.
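The selection rule at the heart of MBRD is easy to state in code: score each sampled candidate by its average utility against the rest of the "crowd" and return the argmax. The token-overlap F1 utility below is a toy stand-in for the task metrics (BLEU, ROUGE, etc.) the paper plugs in.

```python
from collections import Counter

def token_f1(hyp, ref):
    """Toy utility: unigram-overlap F1 between two strings."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)

def mbr_decode(candidates, utility=token_f1):
    """Pick the candidate whose average utility against all other
    sampled candidates (the 'crowd') is highest."""
    def expected_utility(c):
        others = [o for o in candidates if o is not c]
        return sum(utility(c, o) for o in others) / len(others)
    return max(candidates, key=expected_utility)

samples = ["the cat sat on the mat", "a cat sat on a mat", "the dog barked"]
print(mbr_decode(samples))  # "the cat sat on the mat"
```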

Injecting structural hints: Using language models to study inductive biases in language learning
Isabel Papadimitriou | Dan Jurafsky
Findings of the Association for Computational Linguistics: EMNLP 2023

Both humans and transformer language models are able to learn language without explicit structural supervision. What cognitive inductive biases make this learning possible? Here, we examine the effect of different inductive learning biases by actively controlling the inductive biases of artificial learners: we structurally bias models by pretraining on synthetic formally-structured data, and evaluate these structural biases by fine-tuning on three typologically-distant human languages: English, Japanese, and Basque. We investigate the effect on downstream language perplexity of three types of inductive bias: 1) recursive, hierarchical processing, 2) unrestricted token-token dependencies that can’t be modeled by context-free grammars, and 3) a Zipfian power-law vocabulary distribution. We show that complex, non-context-free interactions between tokens form the best inductive biases. Our study leverages the capabilities of transformer models to run controlled language learning experiments that are not possible to run on humans, and surfaces hypotheses about the structures that facilitate language learning in both humans and machines.

Navigating the Grey Area: How Expressions of Uncertainty and Overconfidence Affect Language Models
Kaitlyn Zhou | Dan Jurafsky | Tatsunori Hashimoto
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

The increased deployment of LMs for real-world tasks involving knowledge and facts makes it important to understand model epistemology: what LMs think they know, and how their attitudes toward that knowledge are affected by language use in their inputs. Here, we study an aspect of model epistemology: how epistemic markers of certainty, uncertainty, or evidentiality like “I’m sure it’s”, “I think it’s”, or “Wikipedia says it’s” affect models, and whether they contribute to model failures. We develop a typology of epistemic markers and inject 50 markers into prompts for question answering. We find that LMs are highly sensitive to epistemic markers in prompts, with accuracy varying by more than 80%. Surprisingly, we find that expressions of high certainty result in a 7% decrease in accuracy as compared to low certainty expressions; similarly, factive verbs hurt performance, while evidentials benefit performance. Our analysis of a popular pretraining dataset shows that these markers of uncertainty are associated with answers on question-answering websites, while markers of certainty are associated with questions. These associations may suggest that the behavior of LMs is based on mimicking observed language use, rather than truly reflecting epistemic uncertainty.
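The injection protocol is mechanical enough to sketch: prepend each epistemic marker as the answer prefix and track accuracy per marker. `query_model` below is a hypothetical stand-in for the LM being probed, and the three markers are the examples quoted in the abstract.

```python
MARKERS = {
    "high_certainty": "I'm sure it's",
    "low_certainty": "I think it's",
    "evidential": "Wikipedia says it's",
}

def accuracy_by_marker(qa_pairs, query_model):
    """Inject each marker into the prompt and score answer accuracy,
    so per-marker effects can be compared directly."""
    results = {}
    for name, marker in MARKERS.items():
        correct = sum(
            answer.lower() in query_model(f"{question}\n{marker}").lower()
            for question, answer in qa_pairs
        )
        results[name] = correct / len(qa_pairs)
    return results
```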

Leveraging supplementary text data to kick-start automatic speech recognition system development with limited transcriptions
Nay San | Martijn Bartelds | Blaine Billings | Ella de Falco | Hendi Feriza | Johan Safri | Wawan Sahrozi | Ben Foley | Bradley McDonnell | Dan Jurafsky
Proceedings of the Sixth Workshop on the Use of Computational Methods in the Study of Endangered Languages

When Do Pre-Training Biases Propagate to Downstream Tasks? A Case Study in Text Summarization
Faisal Ladhak | Esin Durmus | Mirac Suzgun | Tianyi Zhang | Dan Jurafsky | Kathleen McKeown | Tatsunori Hashimoto
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Large language models (LLMs) are subject to sociocultural and other biases previously identified using intrinsic evaluations. However, when and how these intrinsic biases in pre-trained LM representations propagate to downstream, fine-tuned NLP tasks like summarization is not well understood. In this work, we investigate one type of bias—name-nationality bias—and trace it from the pre-training stage to a downstream summarization task across multiple summarization modeling choices. We show that these biases manifest themselves as hallucinations in summarization, leading to factually incorrect summaries. We also find that this propagation of biases is algorithm-dependent: more abstractive models allow biases to propagate more directly to downstream tasks as hallucinated facts. Building on these observations, we further analyze how changes to the adaptation method and fine-tuning data set affect name-nationality biases and show that while they can reduce the overall rate of hallucinations, they do not change the types of biases that do appear.

Multilingual BERT has an Accent: Evaluating English Influences on Fluency in Multilingual Models
Isabel Papadimitriou | Kezia Lopez | Dan Jurafsky
Proceedings of the 5th Workshop on Research in Computational Linguistic Typology and Multilingual NLP

While multilingual language models can improve NLP performance on low-resource languages by leveraging higher-resource languages, they also reduce average performance on all languages (the ‘curse of multilinguality’). Here we show another problem with multilingual models: grammatical structures in higher-resource languages bleed into lower-resource languages, a phenomenon we call grammatical structure bias. We show this bias via a novel method for comparing the fluency of multilingual models to the fluency of monolingual Spanish and Greek models: testing their preference for two carefully-chosen variable grammatical structures (optional pronoun-drop in Spanish and optional Subject-Verb ordering in Greek). We find that multilingual BERT is biased toward the English-like setting (explicit pronouns and Subject-Verb-Object ordering) and against the default Spanish and Greek settings, as compared to our monolingual control language model. With our case studies, we hope to bring to light the fine-grained ways in which multilingual models can be biased, and encourage more linguistically-aware fluency evaluation.

Multilingual self-supervised speech representations improve the speech recognition of low-resource African languages with codeswitching
Tolulope Ogunremi | Christopher Manning | Dan Jurafsky
Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching

While many speakers of low-resource languages regularly code-switch between their languages and other regional languages or English, datasets of codeswitched speech are too small to train bespoke acoustic models from scratch or do language model rescoring. Here we propose finetuning self-supervised speech representations such as wav2vec 2.0 XLSR to recognize code-switched data. We find that finetuning self-supervised multilingual representations and augmenting them with n-gram language models trained from transcripts reduces absolute word error rates by up to 20% compared to baselines of hybrid models trained from scratch on code-switched data. Our findings suggest that, in circumstances with limited training data, finetuning self-supervised representations is a viable and better-performing solution.

2022

Problems with Cosine as a Measure of Embedding Similarity for High Frequency Words
Kaitlyn Zhou | Kawin Ethayarajh | Dallas Card | Dan Jurafsky
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Cosine similarity of contextual embeddings is used in many NLP tasks (e.g., QA, IR, MT) and metrics (e.g., BERTScore). Here, we uncover systematic ways in which word similarities estimated by cosine over BERT embeddings are understated and trace this effect to training data frequency. We find that relative to human judgements, cosine similarity underestimates the similarity of frequent words with other instances of the same word or other words across contexts, even after controlling for polysemy and other factors. We conjecture that this underestimation of similarity for high frequency words is due to differences in the representational geometry of high and low frequency words and provide a formal argument for the two-dimensional case.
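The quantity under study, a word's similarity to other instances of itself across contexts, reduces to the average pairwise cosine over its contextual embeddings. A minimal sketch, assuming the vectors come from a model such as BERT:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def self_similarity(embeddings):
    """Average pairwise cosine between contextual embeddings of the same
    word in different contexts; the paper finds this is systematically
    understated for high-frequency words relative to human judgements."""
    pairs = [(i, j) for i in range(len(embeddings))
             for j in range(i + 1, len(embeddings))]
    return sum(cosine(embeddings[i], embeddings[j]) for i, j in pairs) / len(pairs)

# Toy check with random vectors standing in for BERT outputs.
rng = np.random.default_rng(0)
print(self_similarity([rng.standard_normal(8) for _ in range(4)]))
```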

Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual Style Transfer with Small Language Models
Mirac Suzgun | Luke Melas-Kyriazi | Dan Jurafsky
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

We propose a method for arbitrary textual style transfer (TST)—the task of transforming a text into any given style—utilizing general-purpose pre-trained language models. Our method, Prompt-and-Rerank, is based on a mathematical formulation of the TST task, decomposing it into three constituent components: textual similarity, target style strength, and fluency. Our method uses zero-shot or few-shot prompting to obtain a set of candidate generations in the target style, and then re-ranks them according to the three components. Our method enables small pre-trained language models to perform on par with state-of-the-art large-scale models while using two orders of magnitude less compute and memory. We also investigate the effect of model size and prompt design (e.g., prompt paraphrasing and delimiter-pair choice) on style transfer quality across seven diverse textual style transfer datasets, finding, among other things, that delimiter-pair choice has a large impact on performance, and that models have biases on the direction of style transfer.

The Authenticity Gap in Human Evaluation
Kawin Ethayarajh | Dan Jurafsky
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Human ratings are the gold standard in NLG evaluation. The standard protocol is to collect ratings of generated text, average across annotators, and rank NLG systems by their average scores. However, little consideration has been given as to whether this approach faithfully captures human preferences. Analyzing this standard protocol through the lens of utility theory in economics, we identify the implicit assumptions it makes about annotators. These assumptions are often violated in practice, in which case annotator ratings cease to reflect their preferences. The most egregious violations come from using Likert scales, which provably reverse the direction of the true preference in certain cases. We suggest improvements to the standard protocol to make it more theoretically sound, but even in its improved form, it cannot be used to evaluate open-ended tasks like story generation. For the latter, we propose a new human evaluation protocol called system-level probabilistic assessment (SPA). When human evaluation of stories is done with SPA, we can recover the ordering of GPT-3 models by size, with statistically significant results. However, when human evaluation is done with the standard protocol, less than half of the expected preferences can be recovered (e.g., there is no significant difference between curie and davinci, despite using a highly powered test).

Richer Countries and Richer Representations
Kaitlyn Zhou | Kawin Ethayarajh | Dan Jurafsky
Findings of the Association for Computational Linguistics: ACL 2022

We examine whether some countries are more richly represented in embedding space than others. We find that countries whose names occur with low frequency in training corpora are more likely to be tokenized into subwords, are less semantically distinct in embedding space, and are less likely to be correctly predicted: e.g., Ghana (the correct answer and in-vocabulary) is not predicted for “The country producing the most cocoa is [MASK].” Although these performance discrepancies and representational harms are due to frequency, we find that frequency is highly correlated with a country’s GDP, thus perpetuating historic power and wealth inequalities. We analyze the effectiveness of mitigation strategies; recommend that researchers report training word frequencies; and recommend future work for the community to define and design representational guarantees.

Modular Domain Adaptation
Junshen Chen | Dallas Card | Dan Jurafsky
Findings of the Association for Computational Linguistics: ACL 2022

Off-the-shelf models are widely used by computational social science researchers to measure properties of text, such as sentiment. However, without access to source data it is difficult to account for domain shift, which represents a threat to validity. Here, we treat domain adaptation as a modular process that involves separate model producers and model consumers, and show how they can independently cooperate to facilitate more accurate measurements of text. We introduce two lightweight techniques for this scenario, and demonstrate that they reliably increase out-of-domain accuracy on four multi-domain text classification datasets when used with linear and contextual embedding models. We conclude with recommendations for model producers and consumers, and release models and replication code to accompany this paper.

Computationally Identifying Funneling and Focusing Questions in Classroom Discourse
Sterling Alic | Dorottya Demszky | Zid Mancenido | Jing Liu | Heather Hill | Dan Jurafsky
Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)

Responsive teaching is a highly effective strategy that promotes student learning. In math classrooms, teachers might funnel students towards a normative answer or focus students to reflect on their own thinking, depending on their understanding of math concepts. When teachers focus, they treat students’ contributions as resources for collective sensemaking, and thereby significantly improve students’ achievement and confidence in mathematics. We propose the task of computationally detecting funneling and focusing questions in classroom discourse. We do so by creating and releasing an annotated dataset of 2,348 teacher utterances labeled for funneling and focusing questions, or neither. We introduce supervised and unsupervised approaches to differentiating these questions. Our best model, a supervised RoBERTa model fine-tuned on our dataset, has a strong linear correlation of .76 with human expert labels and with positive educational outcomes, including math instruction quality and student achievement, showing the model’s potential for use in automated teacher feedback tools. Our unsupervised measures show significant but weaker correlations with human labels and outcomes, and they highlight interesting linguistic patterns of funneling and focusing questions. The high performance of the supervised measure indicates its promise for supporting teachers in their instruction.

Automated speech tools for helping communities process restricted-access corpora for language revival efforts
Nay San | Martijn Bartelds | Tolulope Ogunremi | Alison Mount | Ruben Thompson | Michael Higgins | Roy Barker | Jane Simpson | Dan Jurafsky
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages

Many archival recordings of speech from endangered languages remain unannotated and inaccessible to community members and language learning programs. One bottleneck is the time-intensive nature of annotation. An even narrower bottleneck occurs for recordings with access constraints, such as language that must be vetted or filtered by authorised community members before annotation can begin. We propose a privacy-preserving workflow to widen both bottlenecks for recordings where speech in the endangered language is intermixed with a more widely-used language such as English for meta-linguistic commentary and questions (e.g., What is the word for ‘tree’?). We integrate voice activity detection (VAD), spoken language identification (SLI), and automatic speech recognition (ASR) to transcribe the metalinguistic content, which an authorised person can quickly scan to triage recordings that can be annotated by people with lower levels of access. We report work-in-progress processing 136 hours of archival audio containing a mix of English and Muruwari. Our collaborative work with the Muruwari custodian of the archival materials shows that this workflow reduces metalanguage transcription time by 20% even given only minimal amounts of annotated training data: 10 utterances per language for SLI, and for ASR at most 39 minutes, possibly as little as 39 seconds.

2021

Measuring Conversational Uptake: A Case Study on Student-Teacher Interactions
Dorottya Demszky | Jing Liu | Zid Mancenido | Julie Cohen | Heather Hill | Dan Jurafsky | Tatsunori Hashimoto
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

In conversation, uptake happens when a speaker builds on the contribution of their interlocutor by, for example, acknowledging, repeating or reformulating what they have said. In education, teachers’ uptake of student contributions has been linked to higher student achievement. Yet measuring and improving teachers’ uptake at scale is challenging, as existing methods require expensive annotation by experts. We propose a framework for computationally measuring uptake, by (1) releasing a dataset of student-teacher exchanges extracted from US math classroom transcripts annotated for uptake by experts; (2) formalizing uptake as pointwise Jensen-Shannon Divergence (pJSD), estimated via next utterance classification; (3) conducting a linguistically-motivated comparison of different unsupervised measures and (4) correlating these measures with educational outcomes. We find that although repetition captures a significant part of uptake, pJSD outperforms repetition-based baselines, as it is capable of identifying a wider range of uptake phenomena like question answering and reformulation. We apply our uptake measure to three different educational datasets with outcome indicators. Unlike baseline measures, pJSD correlates significantly with instruction quality in all three, providing evidence for its generalizability and for its potential to serve as an automated professional development tool for teachers.
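Pointwise JSD builds on the ordinary Jensen-Shannon divergence between two distributions (here, over plausible next utterances); the sketch below shows that underlying computation, while the pointwise estimation via next-utterance classification is developed in the paper.

```python
import numpy as np

def jensen_shannon_divergence(p, q):
    """JSD(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2.
    Symmetric and bounded by log 2 (in nats)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) terms contribute nothing
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(jensen_shannon_divergence([1.0, 0.0], [0.5, 0.5]))  # ~0.2158 nats
```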

Attention Flows are Shapley Value Explanations
Kawin Ethayarajh | Dan Jurafsky
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Shapley Values, a solution to the credit assignment problem in cooperative game theory, are a popular type of explanation in machine learning, having been used to explain the importance of features, embeddings, and even neurons. In NLP, however, leave-one-out and attention-based explanations still predominate. Can we draw a connection between these different methods? We formally prove that — save for the degenerate case — attention weights and leave-one-out values cannot be Shapley Values. Attention flow is a post-processed variant of attention weights obtained by running the max-flow algorithm on the attention graph. Perhaps surprisingly, we prove that attention flows are indeed Shapley Values, at least at the layerwise level. Given the many desirable theoretical qualities of Shapley Values — which have driven their adoption among the ML community — we argue that NLP practitioners should, when possible, adopt attention flow explanations alongside more traditional ones.
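For intuition about what it means for an explanation to be a Shapley Value, here is the definition computed exactly for a small set of players (feasible only for small n, since it enumerates all orderings); the value function is a made-up example.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    to `value` over all orderings (exponential in general)."""
    phi = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        coalition, prev = set(), value(set())
        for p in order:
            coalition.add(p)
            cur = value(coalition)
            phi[p] += cur - prev
            prev = cur
    return {p: total / len(orders) for p, total in phi.items()}

# Made-up value function: the prediction "works" (1.0) only when
# feature "a" is present; feature "b" adds a small bonus.
v = lambda s: (1.0 if "a" in s else 0.0) + (0.1 if "b" in s else 0.0)
print(shapley_values(["a", "b"], v))  # {'a': 1.0, 'b': 0.1}
```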

Sensitivity as a Complexity Measure for Sequence Classification Tasks
Michael Hahn | Dan Jurafsky | Richard Futrell
Transactions of the Association for Computational Linguistics, Volume 9

We introduce a theoretical framework for understanding and predicting the complexity of sequence classification tasks, using a novel extension of the theory of Boolean function sensitivity. The sensitivity of a function, given a distribution over input sequences, quantifies the number of disjoint subsets of the input sequence that can each be individually changed to change the output. We argue that standard sequence classification methods are biased towards learning low-sensitivity functions, so that tasks requiring high sensitivity are more difficult. To that end, we show analytically that simple lexical classifiers can only express functions of bounded sensitivity, and we show empirically that low-sensitivity functions are easier to learn for LSTMs. We then estimate sensitivity on 15 NLP tasks, finding that sensitivity is higher on challenging tasks collected in GLUE than on simple text classification tasks, and that sensitivity predicts the performance both of simple lexical classifiers and of vanilla BiLSTMs without pretrained contextualized embeddings. Within a task, sensitivity predicts which inputs are hard for such simple models. Our results suggest that the success of massively pretrained contextual representations stems in part because they provide representations from which information can be extracted by low-sensitivity decoders.

Causal Effects of Linguistic Properties
Reid Pryzant | Dallas Card | Dan Jurafsky | Victor Veitch | Dhanya Sridhar
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

We consider the problem of using observational data to estimate the causal effects of linguistic properties. For example, does writing a complaint politely lead to a faster response time? How much will a positive product review increase sales? This paper addresses two technical challenges related to the problem before developing a practical method. First, we formalize the causal quantity of interest as the effect of a writer’s intent, and establish the assumptions necessary to identify this from observational data. Second, in practice, we only have access to noisy proxies for the linguistic properties of interest—e.g., predictions from classifiers and lexicons. We propose an estimator for this setting and prove that its bias is bounded when we perform an adjustment for the text. Based on these results, we introduce TextCause, an algorithm for estimating causal effects of linguistic properties. The method leverages (1) distant supervision to improve the quality of noisy proxies, and (2) a pre-trained language model (BERT) to adjust for the text. We show that the proposed method outperforms related approaches when estimating the effect of Amazon review sentiment on semi-simulated sales figures. Finally, we present an applied case study investigating the effects of complaint politeness on bureaucratic response times.

Improving Factual Completeness and Consistency of Image-to-Text Radiology Report Generation
Yasuhide Miura | Yuhao Zhang | Emily Tsai | Curtis Langlotz | Dan Jurafsky
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Neural image-to-text radiology report generation systems offer the potential to improve radiology reporting by reducing the repetitive process of report drafting and identifying possible medical errors. However, existing report generation systems, despite achieving high performances on natural language generation metrics such as CIDEr or BLEU, still suffer from incomplete and inconsistent generations. Here we introduce two new simple rewards to encourage the generation of factually complete and consistent radiology reports: one that encourages the system to generate radiology domain entities consistent with the reference, and one that uses natural language inference to encourage these entities to be described in inferentially consistent ways. We combine these with the novel use of an existing semantic equivalence metric (BERTScore). We further propose a report generation system that optimizes these rewards via reinforcement learning. On two open radiology report datasets, our system substantially improved the F1 score of clinical information extraction performance by +22.1 (Δ +63.9%). We further show via a human evaluation and a qualitative analysis that our system leads to generations that are more factually complete and consistent compared to the baselines.

The Emergence of the Shape Bias Results from Communicative Efficiency
Eva Portelance | Michael C. Frank | Dan Jurafsky | Alessandro Sordoni | Romain Laroche
Proceedings of the 25th Conference on Computational Natural Language Learning

By the age of two, children tend to assume that new word categories are based on objects’ shape, rather than their color or texture; this assumption is called the shape bias. They are thought to learn this bias by observing that their caregiver’s language is biased towards shape based categories. This presents a chicken and egg problem: if the shape bias must be present in the language in order for children to learn it, how did it arise in language in the first place? In this paper, we propose that communicative efficiency explains both how the shape bias emerged and why it persists across generations. We model this process with neural emergent language agents that learn to communicate about raw pixelated images. First, we show that the shape bias emerges as a result of efficient communication strategies employed by agents. Second, we show that pressure brought on by communicative need is also necessary for it to persist across generations; simply having a shape bias in an agent’s input language is insufficient. These results suggest that, over and above the operation of other learning strategies, the shape bias in human learners may emerge and be sustained by communicative pressures.

Focus on what matters: Applying Discourse Coherence Theory to Cross Document Coreference
William Held | Dan Iter | Dan Jurafsky
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Performing event and entity coreference resolution across documents vastly increases the number of candidate mentions, making it intractable to do the full n² pairwise comparisons. Existing approaches simplify by considering coreference only within document clusters, but this fails to handle inter-cluster coreference, common in many applications. As a result, cross-document coreference algorithms are rarely applied to downstream tasks. We draw on an insight from discourse coherence theory: potential coreferences are constrained by the reader’s discourse focus. We model the entities/events in a reader’s focus as a neighborhood within a learned latent embedding space which minimizes the distance between mentions and the centroids of their gold coreference clusters. We then use these neighborhoods to sample only hard negatives to train a fine-grained classifier on mention pairs and their local discourse features. Our approach achieves state-of-the-art results for both events and entities on the ECB+, Gun Violence, Football Coreference, and Cross-Domain Cross-Document Coreference corpora. Furthermore, training on multiple corpora improves average performance across all datasets by 17.2 F1 points, leading to a robust coreference resolution model that is now feasible to apply to downstream tasks.

2020

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Dan Jurafsky | Joyce Chai | Natalie Schluter | Joel Tetreault
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models
Dan Iter | Kelvin Guu | Larry Lansing | Dan Jurafsky
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Recent models for unsupervised representation learning of text have employed a number of techniques to improve contextual word representations but have put little focus on discourse-level representations. We propose Conpono, an inter-sentence objective for pretraining language models that models discourse coherence and the distance between sentences. Given an anchor sentence, our model is trained to predict the text k sentences away using a sampled-softmax objective where the candidates consist of neighboring sentences and sentences randomly sampled from the corpus. On the discourse representation benchmark DiscoEval, our model improves over the previous state-of-the-art by up to 13% and on average 4% absolute across 7 tasks. Our model is the same size as BERT-Base, but outperforms the much larger BERT-Large model and other more recent approaches that incorporate discourse. We also show that Conpono yields gains of 2%-6% absolute even for tasks that do not explicitly evaluate discourse: textual entailment (RTE), common sense reasoning (COPA) and reading comprehension (ReCoRD).

Social Bias Frames: Reasoning about Social and Power Implications of Language
Maarten Sap | Saadia Gabriel | Lianhui Qin | Dan Jurafsky | Noah A. Smith | Yejin Choi
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Warning: this paper contains content that may be offensive or upsetting. Language has the power to reinforce stereotypes and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but rather the implied meanings, that frame people’s judgments about others. For example, given a statement that “we shouldn’t lower our standards to hire more women,” most listeners will infer the implicature intended by the speaker: that “women (candidates) are less qualified.” Most semantic formalisms, to date, do not capture such pragmatic implications in which people express social biases and power differentials in language. We introduce Social Bias Frames, a new conceptual formalism that aims to model the pragmatic frames in which people project social biases and stereotypes onto others. In addition, we introduce the Social Bias Inference Corpus to support large-scale modelling and evaluation with 150k structured annotations of social media posts, covering over 34k implications about a thousand demographic groups. We then establish baseline approaches that learn to recover Social Bias Frames from unstructured text. We find that while state-of-the-art neural models are effective at high-level categorization of whether a given statement projects unwanted social bias (80% F1), they are not effective at spelling out more detailed explanations in terms of Social Bias Frames. Our study motivates future work that combines structured pragmatic inference with commonsense reasoning on social implications.

Detecting Stance in Media On Global Warming
Yiwei Luo | Dallas Card | Dan Jurafsky
Findings of the Association for Computational Linguistics: EMNLP 2020

Citing opinions is a powerful yet understudied strategy in argumentation. For example, an environmental activist might say, “Leading scientists agree that global warming is a serious concern,” framing a clause which affirms their own stance (“that global warming is serious”) as an opinion endorsed (“[scientists] agree”) by a reputable source (“leading”). In contrast, a global warming denier might frame the same clause as the opinion of an untrustworthy source with a predicate connoting doubt: “Mistaken scientists claim [...].” Our work studies opinion-framing in the global warming (GW) debate, an increasingly partisan issue that has received little attention in NLP. We introduce DeSMOG, a dataset of stance-labeled GW sentences, and train a BERT classifier to study novel aspects of argumentation in how different sides of a debate represent their own and each other’s opinions. From 56K news articles, we find that similar linguistic devices for self-affirming and opponent-doubting discourse are used across GW-accepting and skeptic media, though GW-skeptical media shows more opponent-doubt. We also find that authors often characterize sources as hypocritical, by ascribing opinions expressing the author’s own view to source entities known to publicly endorse the opposing view. We release our stance dataset, model, and lexicons of framing devices for future work on opinion-framing and the automatic detection of GW stance.

Utility is in the Eye of the User: A Critique of NLP Leaderboards
Kawin Ethayarajh | Dan Jurafsky
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Benchmarks such as GLUE have helped drive advances in NLP by incentivizing the creation of more accurate models. While this leaderboard paradigm has been remarkably successful, a historical focus on performance-based evaluation has been at the expense of other qualities that the NLP community values in models, such as compactness, fairness, and energy efficiency. In this opinion paper, we study the divergence between what is incentivized by leaderboards and what is useful in practice through the lens of microeconomic theory. We frame both the leaderboard and NLP practitioners as consumers and the benefit they get from a model as its utility to them. With this framing, we formalize how leaderboards – in their current form – can be poor proxies for the NLP community at large. For example, a highly inefficient model would provide less utility to practitioners but not to a leaderboard, since it is a cost that only the former must bear. To allow practitioners to better estimate a model’s utility to them, we advocate for more transparency on leaderboards, such as the reporting of statistics that are of practical concern (e.g., model size, energy efficiency, and inference latency).

Learning Music Helps You Read: Using Transfer to Study Linguistic Structure in Language Models
Isabel Papadimitriou | Dan Jurafsky
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models. We train LSTMs on non-linguistic data and evaluate their performance on natural language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language. We find that training on non-linguistic data with latent structure (MIDI music or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. To pinpoint the kinds of abstract structure that models may be encoding to lead to this improvement, we run similar experiments with two artificial parentheses languages: one which has a hierarchical recursive structure, and a control which has paired tokens but no recursion. Surprisingly, training a model on either of these artificial languages leads to the same substantial gains when testing on natural language. Further experiments on transfer between natural languages controlling for vocabulary overlap show that zero-shot performance on a test language is highly correlated with typological syntactic similarity to the training language, suggesting that representations induced by pre-training correspond to the cross-linguistic syntactic properties. Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which allow for natural language acquisition.

With Little Power Comes Great Responsibility
Dallas Card | Peter Henderson | Urvashi Khandelwal | Robin Jia | Kyle Mahowald | Dan Jurafsky
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Despite its importance to experimental design, statistical power (the probability that, given a real effect, an experiment will reject the null hypothesis) has largely been ignored by the NLP community. Underpowered experiments make it more difficult to discern the difference between statistical noise and meaningful model improvements, and increase the chances of exaggerated findings. By meta-analyzing a set of existing NLP papers and datasets, we characterize typical power for a variety of settings and conclude that underpowered experiments are common in the NLP literature. In particular, for several tasks in the popular GLUE benchmark, small test sets mean that most attempted comparisons to state of the art models will not be adequately powered. Similarly, based on reasonable assumptions, we find that the most typical experimental design for human rating studies will be underpowered to detect small model differences, of the sort that are frequently studied. For machine translation, we find that typical test sets of 2000 sentences have approximately 75% power to detect differences of 1 BLEU point. To improve the situation going forward, we give an overview of best practices for power analysis in NLP and release a series of notebooks to assist with future power analyses.
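A simulation-based power analysis of the kind the paper advocates can be sketched in a few lines: assume an effect size, repeatedly simulate the experiment, and count how often a significance test rejects. The sketch below uses a paired sign test and treats the two systems' errors as independent, both simplifying assumptions.

```python
import random
from math import comb

def simulated_power(n_items, acc_a, acc_b, n_sims=1000, alpha=0.05):
    """Monte Carlo estimate of the power of a two-sided sign test to
    detect that system B (accuracy acc_b) beats system A (accuracy
    acc_a) on a test set of n_items examples."""
    rejections = 0
    for _ in range(n_sims):
        a_only = b_only = 0  # items exactly one system gets right
        for _ in range(n_items):
            a, b = random.random() < acc_a, random.random() < acc_b
            a_only += a and not b
            b_only += b and not a
        n = a_only + b_only
        if n == 0:
            continue
        k = max(a_only, b_only)
        # Two-sided binomial (sign) test p-value under H0: p = 0.5.
        p = min(1.0, 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n)
        rejections += p < alpha
    return rejections / n_sims

print(simulated_power(n_items=500, acc_a=0.80, acc_b=0.83))
```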

2019

Analyzing Polarization in Social Media: Method and Application to Tweets on 21 Mass Shootings
Dorottya Demszky | Nikhil Garg | Rob Voigt | James Zou | Jesse Shapiro | Matthew Gentzkow | Dan Jurafsky
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

We provide an NLP framework to uncover four linguistic dimensions of political polarization in social media: topic choice, framing, affect and illocutionary force. We quantify these aspects with existing lexical methods, and propose clustering of tweet embeddings as a means to identify salient topics for analysis across events; human evaluations show that our approach generates more cohesive topics than traditional LDA-based models. We apply our methods to study 4.4M tweets on 21 mass shootings. We provide evidence that the discussion of these events is highly polarized politically and that this polarization is primarily driven by partisan differences in framing rather than topic choice. We identify framing devices, such as grounding and the contrasting use of the terms “terrorist” and “crazy”, that contribute to polarization. Results pertaining to topic choice, affect and illocutionary force suggest that Republicans focus more on the shooter and event-specific facts (news) while Democrats focus more on the victims and call for policy changes. Our work contributes to a deeper understanding of the way group divisions manifest in language and to computational methods for studying them.

pdf bib
Let’s Make Your Request More Persuasive: Modeling Persuasive Strategies via Semi-Supervised Neural Nets on Crowdfunding Platforms
Diyi Yang | Jiaao Chen | Zichao Yang | Dan Jurafsky | Eduard Hovy
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Modeling what makes a request persuasive - eliciting the desired response from a reader - is critical to the study of propaganda, behavioral economics, and advertising. Yet current models can’t quantify the persuasiveness of requests or extract successful persuasive strategies. Building on theories of persuasion, we propose a neural network to quantify persuasiveness and identify the persuasive strategies in advocacy requests. Our semi-supervised hierarchical neural network model is supervised by the number of people persuaded to take actions and partially supervised at the sentence level with human-labeled rhetorical strategies. Our method outperforms several baselines, uncovers persuasive strategies - offering increased interpretability of persuasive speech - and has applications for other situations with document-level supervision but only partial sentence supervision.

pdf bib
Recursive Routing Networks: Learning to Compose Modules for Language Understanding
Ignacio Cases | Clemens Rosenbaum | Matthew Riemer | Atticus Geiger | Tim Klinger | Alex Tamkin | Olivia Li | Sandhini Agarwal | Joshua D. Greene | Dan Jurafsky | Christopher Potts | Lauri Karttunen
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

We introduce Recursive Routing Networks (RRNs), which are modular, adaptable models that learn effectively in diverse environments. RRNs consist of a set of functions, typically organized into a grid, and a meta-learner decision-making component called the router. The model jointly optimizes the parameters of the functions and the meta-learner’s policy for routing inputs through those functions. RRNs can be incorporated into existing architectures in a number of ways; we explore adding them to word representation layers, recurrent network hidden layers, and classifier layers. Our evaluation task is natural language inference (NLI). Using the MultiNLI corpus, we show that an RRN’s routing decisions reflect the high-level genre structure of that corpus. To show that RRNs can learn to specialize to more fine-grained semantic distinctions, we introduce a new corpus of NLI examples involving implicative predicates, and show that the model components become fine-tuned to the inferential signatures that are characteristic of these predicates.

pdf bib
Integrating Text and Image: Determining Multimodal Document Intent in Instagram Posts
Julia Kruk | Jonah Lubin | Karan Sikka | Xiao Lin | Dan Jurafsky | Ajay Divakaran
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Computing author intent from multimodal data like Instagram posts requires modeling a complex relationship between text and image. For example, a caption might evoke an ironic contrast with the image, so neither caption nor image is a mere transcript of the other. Instead they combine, via what has been called meaning multiplication (Bateman et al.), to create a new meaning that has a more complex relation to the literal meanings of text and image. Here we introduce a multimodal dataset of 1299 Instagram posts labeled for three orthogonal taxonomies: the authorial intent behind the image-caption pair, the contextual relationship between the literal meanings of the image and caption, and the semiotic relationship between the signified meanings of the image and caption. We build a baseline deep multimodal classifier to validate the taxonomy, showing that employing both text and image improves intent detection by 9.6 compared to using only the image modality, demonstrating the commonality of non-intersective meaning multiplication. The gain with multimodality is greatest when the image and caption diverge semiotically. Our dataset offers a new resource for the study of the rich meanings that result from pairing text and image.

pdf bib
Neural Text Style Transfer via Denoising and Reranking
Joseph Lee | Ziang Xie | Cindy Wang | Max Drach | Dan Jurafsky | Andrew Ng
Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation

We introduce a simple method for text style transfer that frames style transfer as denoising: we synthesize a noisy corpus and treat the source style as a noisy version of the target style. To control for aspects such as preserving meaning while modifying style, we propose a reranking approach in the data synthesis phase. We evaluate our method on three novel style transfer tasks: transferring between British and American varieties, text genres (formal vs. casual), and lyrics from different musical genres. By measuring style transfer quality, meaning preservation, and the fluency of generated outputs, we demonstrate that our method produces high-quality output while maintaining the flexibility to suggest syntactically rich stylistic edits.

pdf bib
From Insanely Jealous to Insanely Delicious: Computational Models for the Semantic Bleaching of English Intensifiers
Yiwei Luo | Dan Jurafsky | Beth Levin
Proceedings of the 1st International Workshop on Computational Approaches to Historical Language Change

We introduce novel computational models of semantic bleaching, a widespread category of change in which words become more abstract or lose elements of meaning, like the development of “arrive” from its earlier meaning ‘become at shore.’ We validate our methods on a widespread case of bleaching in English: de-adjectival adverbs that originate as manner adverbs (as in “awfully behaved”) and later become intensifying adverbs (as in “awfully nice”). Our methods formally quantify three reflexes of bleaching: decreasing similarity to the source meaning (e.g., “awful”), increasing similarity to a fully bleached prototype (e.g., “very”), and increasing productivity (e.g., the breadth of adjectives that an adverb modifies). We also test a new causal model and find evidence that bleaching is initially triggered in contexts such as “conspicuously evident” and “insanely jealous”, where an adverb premodifies a semantically similar adjective. These contexts provide a form of “bridging context” (Evans and Wilkins, 2000) that allows a manner adverb to be reinterpreted as an intensifying adverb similar to “very”.
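
The first two reflexes reduce to simple vector-space comparisons. Here is a small sketch under the assumption of pre-trained word vectors stored in a plain dict; the random vectors below are placeholders, not the paper's diachronic embeddings.

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=100) for w in ["awfully", "awful", "very"]}

# Reflex 1: similarity to the source adjective should decrease over time.
print("to source 'awful':  ", cos(vecs["awfully"], vecs["awful"]))
# Reflex 2: similarity to a bleached prototype should increase over time.
print("to prototype 'very':", cos(vecs["awfully"], vecs["very"]))
```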

2018

pdf bib
Noising and Denoising Natural Language: Diverse Backtranslation for Grammar Correction
Ziang Xie | Guillaume Genthial | Stanley Xie | Andrew Ng | Dan Jurafsky
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Translation-based methods for grammar correction that directly map noisy, ungrammatical sentences to their clean counterparts are able to correct a broad range of errors; however, such techniques are bottlenecked by the need for a large parallel corpus of noisy and clean sentence pairs. In this paper, we consider synthesizing parallel data by noising a clean monolingual corpus. While most previous approaches introduce perturbations using features computed from local context windows, we instead develop error generation processes using a neural sequence transduction model trained to translate clean examples to their noisy counterparts. Given a corpus of clean examples, we propose beam search noising procedures to synthesize additional noisy examples that human evaluators were nearly unable to discriminate from nonsynthesized examples. Surprisingly, when trained on additional data synthesized using our best-performing noising scheme, our model approaches the same performance as when trained on additional nonsynthesized data.

pdf bib
Deconfounded Lexicon Induction for Interpretable Social Science
Reid Pryzant | Kelly Shen | Dan Jurafsky | Stefan Wagner
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

NLP algorithms are increasingly used in computational social science to take linguistic observations and predict outcomes like human preferences or actions. Making these social models transparent and interpretable often requires identifying features in the input that predict outcomes while also controlling for potential confounds. We formalize this need as a new task: inducing a lexicon that is predictive of a set of target variables yet uncorrelated to a set of confounding variables. We introduce two deep learning algorithms for the task. The first uses a bifurcated architecture to separate the explanatory power of the text and confounds. The second uses an adversarial discriminator to force confound-invariant text encodings. Both elicit lexicons from learned weights and attentional scores. We use them to induce lexicons that are predictive of timely responses to consumer complaints (controlling for product), enrollment from course descriptions (controlling for subject), and sales from product descriptions (controlling for seller). In each domain, our algorithms pick words associated with narrative persuasion that are more predictive and less confound-related than those selected by standard feature-weighting and lexicon-induction techniques like regression and log odds.
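
One common way to implement the adversarial variant is gradient reversal; the sketch below is my reading of the abstract, not the released model, with layer sizes and data as placeholders. The adversary learns to predict the confound from the text encoding, while the reversed gradient pushes the encoder to defeat it.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output  # flip gradients flowing back into the encoder

encoder = nn.Linear(300, 64)       # stand-in text encoder
outcome_head = nn.Linear(64, 1)    # predicts the target variable
confound_head = nn.Linear(64, 1)   # adversary predicting the confound

x = torch.randn(8, 300)            # a batch of (fake) text features
h = torch.relu(encoder(x))
outcome_loss = nn.functional.mse_loss(outcome_head(h), torch.randn(8, 1))
confound_loss = nn.functional.mse_loss(
    confound_head(GradReverse.apply(h)), torch.randn(8, 1))
# One loss trains the adversary; the reversed gradient makes the same
# backward pass push the encoder toward confound-invariant encodings.
(outcome_loss + confound_loss).backward()
```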

pdf bib
Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context
Urvashi Khandelwal | He He | Peng Qi | Dan Jurafsky
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We know very little about how neural language models (LMs) use prior linguistic context. In this paper, we investigate the role of context in an LSTM LM, through ablation studies. Specifically, we analyze the increase in perplexity when prior context words are shuffled, replaced, or dropped. On two standard datasets, Penn Treebank and WikiText-2, we find that the model is capable of using about 200 tokens of context on average, but sharply distinguishes nearby context (recent 50 tokens) from the distant history. The model is highly sensitive to the order of words within the most recent sentence, but ignores word order in the long-range context (beyond 50 tokens), suggesting the distant past is modeled only as a rough semantic field or topic. We further find that the neural caching model (Grave et al., 2017b) especially helps the LSTM to copy words from within this distant context. Overall, our analysis not only provides a better understanding of how neural LMs use their context, but also sheds light on recent success from cache-based models.
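
The core ablation is easy to state in code. Here is a hedged sketch, where `nll` is an assumed callback returning a language model's total negative log-likelihood for a token sequence (the paper's own setup scores the final sentence given its context).

```python
import math
import random

def shuffle_distant(tokens, keep_recent=50, seed=0):
    """Shuffle all context tokens except the most recent `keep_recent`."""
    distant, recent = list(tokens[:-keep_recent]), list(tokens[-keep_recent:])
    random.Random(seed).shuffle(distant)
    return distant + recent

def perplexity_ratio(tokens, nll):
    """Per-token perplexity ratio after ablation; > 1 means word order in
    the distant context mattered to the model."""
    return math.exp((nll(shuffle_distant(tokens)) - nll(tokens)) / len(tokens))
```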

pdf bib
JESC: Japanese-English Subtitle Corpus
Reid Pryzant | Youngjoo Chung | Dan Jurafsky | Denny Britz
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

pdf bib
RtGender: A Corpus for Studying Differential Responses to Gender
Rob Voigt | David Jurgens | Vinodkumar Prabhakaran | Dan Jurafsky | Yulia Tsvetkov
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

pdf bib
Measuring the Evolution of a Scientific Field through Citation Frames
David Jurgens | Srijan Kumar | Raine Hoover | Dan McFarland | Dan Jurafsky
Transactions of the Association for Computational Linguistics, Volume 6

Citations have long been used to characterize the state of a scientific field and to identify influential works. However, writers use citations for different purposes, and this varied purpose influences uptake by future scholars. Unfortunately, our understanding of how scholars use and frame citations has been limited to small-scale manual citation analysis of individual papers. We perform the largest behavioral study of citations to date, analyzing how scientific works frame their contributions through different types of citations and how this framing affects the field as a whole. We introduce a new dataset of nearly 2,000 citations annotated for their function, and use it to develop a state-of-the-art classifier and label the papers of an entire field: Natural Language Processing. We then show how differences in framing affect scientific uptake and reveal the evolution of the publication venues and the field as a whole. We demonstrate that authors are sensitive to discourse structure and publication venue when citing, and that how a paper frames its work through citations is predictive of the citation count it will receive. Finally, we use changes in citation framing to show that the field of NLP is undergoing a significant increase in consensus.

pdf bib
Detecting Institutional Dialog Acts in Police Traffic Stops
Vinodkumar Prabhakaran | Camilla Griffiths | Hang Su | Prateek Verma | Nelson Morgan | Jennifer L. Eberhardt | Dan Jurafsky
Transactions of the Association for Computational Linguistics, Volume 6

We apply computational dialog methods to police body-worn camera footage to model conversations between police officers and community members in traffic stops. Relying on the theory of institutional talk, we develop a labeling scheme for police speech during traffic stops, and a tagger to detect institutional dialog acts (Reasons, Searches, Offering Help) from transcribed text at the turn (78% F-score) and stop (89% F-score) level. We then develop speech recognition and segmentation algorithms to detect these acts at the stop level from raw camera audio (81% F-score, with even higher accuracy for crucial acts like conveying the reason for the stop). We demonstrate that the dialog structures produced by our tagger could reveal whether officers follow law enforcement norms like introducing themselves, explaining the reason for the stop, and asking permission for searches. This work may therefore inform and aid efforts to ensure the procedural justice of police-community interactions.

pdf bib
Automatic Detection of Incoherent Speech for Diagnosing Schizophrenia
Dan Iter | Jong Yoon | Dan Jurafsky
Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic

Schizophrenia is a mental disorder which afflicts an estimated 0.7% of adults worldwide. It affects many areas of mental function, often evident from incoherent speech. Diagnosing schizophrenia relies on subjective judgments resulting in disagreements even among trained clinicians. Recent studies have proposed the use of natural language processing for diagnosis by drawing on automatically-extracted linguistic features like discourse coherence and lexicon. Here, we present the first benchmark comparison of previously proposed coherence models for detecting symptoms of schizophrenia and evaluate their performance on a new dataset of recorded interviews between subjects and clinicians. We also present two alternative coherence metrics based on modern sentence embedding techniques that outperform the previous methods on our dataset. Lastly, we propose a novel computational model for reference incoherence based on ambiguous pronoun usage and show that it is a highly predictive feature on our data. While the number of subjects is limited in this pilot study, our results suggest new directions for diagnosing common symptoms of schizophrenia.
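
A minimal version of an embedding-based coherence metric, in the spirit of (though simpler than) the ones described above: average cosine similarity between adjacent sentence embeddings, where `embed` is an assumed sentence-embedding function.

```python
import numpy as np

def coherence(sentences, embed):
    """Mean cosine similarity of adjacent sentences; incoherent speech
    should score lower."""
    vecs = [embed(s) for s in sentences]
    sims = [u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            for u, v in zip(vecs, vecs[1:])]
    return float(np.mean(sims))
```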

pdf bib
Textual Analogy Parsing: What’s Shared and What’s Compared among Analogous Facts
Matthew Lamm | Arun Chaganty | Christopher D. Manning | Dan Jurafsky | Percy Liang
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

To understand a sentence like “whereas only 10% of White Americans live at or below the poverty line, 28% of African Americans do” it is important not only to identify individual facts, e.g., poverty rates of distinct demographic groups, but also the higher-order relations between them, e.g., the disparity between them. In this paper, we propose the task of Textual Analogy Parsing (TAP) to model this higher-order meaning. Given a sentence such as the one above, TAP outputs a frame-style meaning representation which explicitly specifies what is shared (e.g., poverty rates) and what is compared (e.g., White Americans vs. African Americans, 10% vs. 28%) between its component facts. Such a meaning representation can enable new applications that rely on discourse understanding such as automated chart generation from quantitative text. We present a new dataset for TAP, baselines, and a model that successfully uses an ILP to enforce the structural constraints of the problem.

pdf bib
Framing and Agenda-setting in Russian News: a Computational Analysis of Intricate Political Strategies
Anjalie Field | Doron Kliger | Shuly Wintner | Jennifer Pan | Dan Jurafsky | Yulia Tsvetkov
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Amidst growing concern over media manipulation, NLP attention has focused on overt strategies like censorship and “fake news”. Here, we draw on two concepts from political science literature to explore subtler strategies for government media manipulation: agenda-setting (selecting what topics to cover) and framing (deciding how topics are covered). We analyze 13 years (100K articles) of the Russian newspaper Izvestia and identify a strategy of distraction: articles mention the U.S. more frequently in the month directly following an economic downturn in Russia. We introduce embedding-based methods for cross-lingually projecting English frames to Russian, and discover that these articles emphasize U.S. moral failings and threats to the U.S. Our work offers new ways to identify subtle media manipulation strategies at the intersection of agenda-setting and framing.

2017

pdf bib
Neural Net Models of Open-domain Discourse Coherence
Jiwei Li | Dan Jurafsky
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Discourse coherence is strongly associated with text quality, making it important to natural language generation and understanding. Yet existing models of coherence focus on measuring individual aspects of coherence (lexical overlap, rhetorical structure, entity centering) in narrow domains. In this paper, we describe domain-independent neural models of discourse coherence that are capable of measuring multiple aspects of coherence in existing sentences and can maintain coherence while generating new sentences. We study both discriminative models that learn to distinguish coherent from incoherent discourse, and generative models that produce coherent text, including a novel neural latent-variable Markovian generative model that captures the latent discourse dependencies between sentences in a text. Our work achieves state-of-the-art performance on multiple coherence evaluations, and marks an initial step in generating coherent texts given discourse contexts.

pdf bib
Adversarial Learning for Neural Dialogue Generation
Jiwei Li | Will Monroe | Tianlin Shi | Sébastien Jean | Alan Ritter | Dan Jurafsky
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

We apply adversarial training to open-domain dialogue generation, training a system to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning problem where we jointly train two systems: a generative model to produce response sequences, and a discriminator, analogous to the human evaluator in the Turing test, to distinguish between the human-generated dialogues and the machine-generated ones. In this generative adversarial network approach, the outputs from the discriminator are used to encourage the system towards more human-like dialogue. Further, we investigate models for adversarial evaluation that use success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls. Experimental results on several metrics, including adversarial evaluation, demonstrate that the adversarially-trained system generates higher-quality responses than previous baselines.
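
A toy sketch of the adversarial loop, not the paper's system: the generator samples a response token by token, a (here untrained) discriminator scores how human-like it looks, and that score serves as a REINFORCE reward. All sizes are placeholders, and in the full setup the discriminator would be trained in alternation on human vs. generated responses.

```python
import torch
import torch.nn as nn

vocab, hidden = 50, 32
gen_emb = nn.Embedding(vocab, hidden)
gen = nn.GRU(hidden, hidden, batch_first=True)
gen_out = nn.Linear(hidden, vocab)
disc = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())  # toy discriminator
opt_g = torch.optim.Adam([*gen.parameters(), *gen_emb.parameters(),
                          *gen_out.parameters()], lr=1e-3)

def sample_response(max_len=10):
    """Sample tokens autoregressively; return log-probs and hidden states."""
    tok, h, logps, states = torch.zeros(1, 1, dtype=torch.long), None, [], []
    for _ in range(max_len):
        out, h = gen(gen_emb(tok), h)
        dist = torch.distributions.Categorical(logits=gen_out(out[:, -1]))
        tok = dist.sample()
        logps.append(dist.log_prob(tok))
        states.append(out[:, -1])
        tok = tok.unsqueeze(0)
    return torch.stack(logps), torch.cat(states)

logps, states = sample_response()
reward = disc(states.mean(dim=0, keepdim=True)).squeeze()  # P("human-like")
loss = -(logps.sum() * reward.detach())                    # REINFORCE
opt_g.zero_grad(); loss.backward(); opt_g.step()
```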

pdf bib
A Two-stage Sieve Approach for Quote Attribution
Grace Muzny | Michael Fang | Angel Chang | Dan Jurafsky
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers

We present a deterministic sieve-based system for attributing quotations in literary text and a new dataset: QuoteLi3. Quote attribution, determining who said what in a given text, is important for tasks like creating dialogue systems, and in newer areas like computational literary studies, where it creates opportunities to analyze novels at scale rather than only a few at a time. We release QuoteLi3, which contains more than 6,000 annotations linking quotes to speaker mentions and quotes to speaker entities, and introduce a new algorithm for quote attribution. Our two-stage algorithm first links quotes to mentions, then mentions to entities. Using two stages encapsulates difficult sub-problems and improves system performance. The modular design allows us to tune for overall performance or higher precision, which is useful for many real-world use cases. Our system achieves an average F-score of 87.5 across three novels, outperforming previous systems, and can be tuned for precision of 90.4 at a recall of 65.1.
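
The sieve architecture itself is simple: precision-ordered rules are tried in sequence, and each quote is attributed by the first rule that fires. Below is a minimal structural sketch; the rules are invented illustrations, not the released system's sieves.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quote:
    id: int
    text: str

def attribute_quotes(quotes, sieves):
    """Attribute each quote using the first (highest-precision) rule that fires."""
    attributions = {}
    for quote in quotes:
        for sieve in sieves:            # ordered from most to least precise
            mention = sieve(quote)
            if mention is not None:
                attributions[quote.id] = mention
                break                   # lower-precision rules never run
    return attributions

def said_pattern(quote):                # high precision, low recall
    return "Alice" if "said Alice" in quote.text else None

def nearest_mention(quote):             # low-precision fallback
    return "nearest-character"

print(attribute_quotes([Quote(0, '"Hello," said Alice.')],
                       [said_pattern, nearest_mention]))
```

Dropping or reordering the low-precision fallback rules is what lets a system like this be tuned for precision over recall.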

pdf bib
Incorporating Dialectal Variability for Socially Equitable Language Identification
David Jurgens | Yulia Tsvetkov | Dan Jurafsky
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Language identification (LID) is a critical first step for processing multilingual text. Yet most LID systems are not designed to handle the linguistic diversity of global platforms like Twitter, where local dialects and rampant code-switching lead language classifiers to systematically miss minority dialect speakers and multilingual speakers. We propose a new dataset and a character-based sequence-to-sequence model for LID designed to support dialectal and multilingual language varieties. Our model achieves state-of-the-art performance on multiple LID benchmarks. Furthermore, in a case study using Twitter for health tracking, our method substantially increases the availability of texts written by underrepresented populations, enabling the development of “socially inclusive” NLP tools.

2016

pdf bib
Distinguishing Past, On-going, and Future Events: The EventStatus Corpus
Ruihong Huang | Ignacio Cases | Dan Jurafsky | Cleo Condoravdi | Ellen Riloff
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf bib
Inducing Domain-Specific Sentiment Lexicons from Unlabeled Corpora
William L. Hamilton | Kevin Clark | Jure Leskovec | Dan Jurafsky
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf bib
Deep Reinforcement Learning for Dialogue Generation
Jiwei Li | Will Monroe | Alan Ritter | Dan Jurafsky | Michel Galley | Jianfeng Gao
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf bib
Cultural Shift or Linguistic Drift? Comparing Two Computational Measures of Semantic Change
William L. Hamilton | Jure Leskovec | Dan Jurafsky
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf bib
Visualizing and Understanding Neural Models in NLP
Jiwei Li | Xinlei Chen | Eduard Hovy | Dan Jurafsky
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Predicting the Rise and Fall of Scientific Topics from Trends in their Rhetorical Framing
Vinodkumar Prabhakaran | William L. Hamilton | Dan McFarland | Dan Jurafsky
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf bib
Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change
William L. Hamilton | Jure Leskovec | Dan Jurafsky
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

pdf bib
A computational analysis of poetic style: Imagism and its influence on modern professional and amateur poetry
Justine T. Kao | Dan Jurafsky
Linguistic Issues in Language Technology, Volume 12, 2015 - Literature Lifts up Computational Linguistics

How do standards of poetic beauty change as a function of time and expertise? Here we use computational methods to compare the stylistic features of 359 English poems written by 19th century professional poets, Imagist poets, contemporary professional poets, and contemporary amateur poets. Building upon techniques designed to analyze style and sentiment in texts, we examine elements of poetic craft such as imagery, sound devices, emotive language, and diction. We find that contemporary professional poets use significantly more concrete words than 19th century poets, fewer emotional words, and more complex sound devices. These changes are consistent with the tenets of Imagism, an early 20th-century literary movement. Further analyses show that contemporary amateur poems resemble 19th century professional poems more than contemporary professional poems on several dimensions. The stylistic similarities between contemporary amateur poems and 19th century professional poems suggest that elite standards of poetic beauty in the past “trickled down” to influence amateur works in the present. Our results highlight the influence of Imagism on the modern aesthetic and reveal the dynamics between “high” and “low” art. We suggest that computational linguistics may shed light on the forces and trends that shape poetic style.

pdf bib
Lexicon-Free Conversational Speech Recognition with Neural Networks
Andrew Maas | Ziang Xie | Dan Jurafsky | Andrew Ng
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Do Multi-Sense Embeddings Improve Natural Language Understanding?
Jiwei Li | Dan Jurafsky
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

pdf bib
When Are Tree Structures Necessary for Deep Learning of Representations?
Jiwei Li | Thang Luong | Dan Jurafsky | Eduard Hovy
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

pdf bib
A Hierarchical Neural Autoencoder for Paragraphs and Documents
Jiwei Li | Thang Luong | Dan Jurafsky
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

pdf bib
The Users Who Say ‘Ni’: Audience Identification in Chinese-language Restaurant Reviews
Rob Voigt | Dan Jurafsky
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

2014

pdf bib
On the Importance of Text Analysis for Stock Price Prediction
Heeyoung Lee | Mihai Surdeanu | Bill MacCartney | Dan Jurafsky
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

We investigate the importance of text analysis for stock price prediction. In particular, we introduce a system that forecasts companies’ stock price changes (UP, DOWN, STAY) in response to financial events reported in 8-K documents. Our results indicate that using text boosts prediction accuracy by over 10% (relative) compared to a strong baseline that incorporates many financially-rooted features. This impact is most important in the short term (i.e., the next day after the financial event) but persists for up to five days.

pdf bib
Event Extraction Using Distant Supervision
Kevin Reschke | Martin Jankowiak | Mihai Surdeanu | Christopher Manning | Daniel Jurafsky
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Distant supervision is a successful paradigm that gathers training data for information extraction systems by automatically aligning vast databases of facts with text. Previous work has demonstrated its usefulness for the extraction of binary relations such as a person’s employer or a film’s director. Here, we extend the distant supervision approach to template-based event extraction, focusing on the extraction of passenger counts, aircraft types, and other facts concerning airplane crash events. We present a new publicly available dataset and event extraction task in the plane crash domain based on Wikipedia infoboxes and newswire text. Using this dataset, we conduct a preliminary evaluation of four distantly supervised extraction models which assign named entity mentions in text to entries in the event template. Our results indicate that joint inference over sequences of candidate entity mentions is beneficial. Furthermore, we demonstrate that the Searn algorithm outperforms a linear-chain CRF and strong baselines with local inference.
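
The alignment step at the heart of distant supervision can be sketched in a few lines; the data structures below are invented for illustration, and the real system matches entity mentions rather than raw strings.

```python
def align(infobox, sentences):
    """Yield noisy (sentence, value, slot) training triples by string match."""
    for slot, value in infobox.items():
        for sent in sentences:
            if str(value) in sent:
                yield sent, value, slot

infobox = {"fatalities": 228, "aircraft_type": "Airbus A330"}
sentences = ["All 228 passengers and crew were killed.",
             "The Airbus A330 disappeared over the Atlantic."]
for triple in align(infobox, sentences):
    print(triple)
```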

pdf bib
Obituary: Charles J. Fillmore
Dan Jurafsky
Computational Linguistics, Volume 40, Issue 3 - September 2014

2013

pdf bib
Breaking Out of Local Optima with Count Transforms and Model Recombination: A Study in Grammar Induction
Valentin I. Spitkovsky | Hiyan Alshawi | Daniel Jurafsky
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf bib
A computational approach to politeness with application to social factors
Cristian Danescu-Niculescu-Mizil | Moritz Sudhof | Dan Jurafsky | Jure Leskovec | Christopher Potts
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf bib
Linguistic Models for Analyzing and Detecting Biased Language
Marta Recasens | Cristian Danescu-Niculescu-Mizil | Dan Jurafsky
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf bib
Implicatures and Nested Beliefs in Approximate Decentralized-POMDPs
Adam Vogel | Christopher Potts | Dan Jurafsky
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf bib
Generating Recommendation Dialogs by Extracting Information from User Reviews
Kevin Reschke | Adam Vogel | Dan Jurafsky
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf bib
Deterministic Coreference Resolution Based on Entity-Centric, Precision-Ranked Rules
Heeyoung Lee | Angel Chang | Yves Peirsman | Nathanael Chambers | Mihai Surdeanu | Dan Jurafsky
Computational Linguistics, Volume 39, Issue 4 - December 2013

pdf bib
Same Referent, Different Words: Unsupervised Mining of Opaque Coreferent Mentions
Marta Recasens | Matthew Can | Daniel Jurafsky
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Emergence of Gricean Maxims from Multi-Agent Decision Theory
Adam Vogel | Max Bodoia | Christopher Potts | Daniel Jurafsky
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Tradition and Modernity in 20th Century Chinese Poetry
Rob Voigt | Dan Jurafsky
Proceedings of the Workshop on Computational Linguistics for Literature

pdf bib
Positive Diversity Tuning for Machine Translation System Combination
Daniel Cer | Christopher D. Manning | Dan Jurafsky
Proceedings of the Eighth Workshop on Statistical Machine Translation

2012

pdf bib
Parsing Time: Learning to Interpret Time Expressions
Gabor Angeli | Christopher Manning | Daniel Jurafsky
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Capitalization Cues Improve Dependency Grammar Induction
Valentin I. Spitkovsky | Hiyan Alshawi | Daniel Jurafsky
Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure

pdf bib
A Computational Analysis of Style, Affect, and Imagery in Contemporary Poetry
Justine Kao | Dan Jurafsky
Proceedings of the NAACL-HLT 2012 Workshop on Computational Linguistics for Literature

pdf bib
Towards a Literary Machine Translation: The Role of Referential Cohesion
Rob Voigt | Dan Jurafsky
Proceedings of the NAACL-HLT 2012 Workshop on Computational Linguistics for Literature

pdf bib
Towards a Computational History of the ACL: 1980-2008
Ashton Anderson | Dan Jurafsky | Daniel A. McFarland
Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries

pdf bib
He Said, She Said: Gender in the ACL Anthology
Adam Vogel | Dan Jurafsky
Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries

pdf bib
Joint Entity and Event Coreference Resolution across Documents
Heeyoung Lee | Marta Recasens | Angel Chang | Mihai Surdeanu | Dan Jurafsky
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

pdf bib
Three Dependency-and-Boundary Models for Grammar Induction
Valentin I. Spitkovsky | Hiyan Alshawi | Daniel Jurafsky
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

pdf bib
Template-Based Information Extraction without the Templates
Nathanael Chambers | Dan Jurafsky
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Using Query Patterns to Learn the Duration of Events
Andrey Gusev | Nathanael Chambers | Divye Raj Khilnani | Pranav Khaitan | Steven Bethard | Dan Jurafsky
Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011)

pdf bib
Punctuation: Making a Point in Unsupervised Dependency Parsing
Valentin I. Spitkovsky | Hiyan Alshawi | Daniel Jurafsky
Proceedings of the Fifteenth Conference on Computational Natural Language Learning

pdf bib
A Study of Academic Collaborations in Computational Linguistics using a Latent Mixture of Authors Model
Nikhil Johri | Daniel Ramage | Daniel McFarland | Daniel Jurafsky
Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities

pdf bib
Stanford’s Multi-Pass Sieve Coreference Resolution System at the CoNLL-2011 Shared Task
Heeyoung Lee | Yves Peirsman | Angel Chang | Nathanael Chambers | Mihai Surdeanu | Dan Jurafsky
Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task

pdf bib
Lateen EM: Unsupervised Training with Multiple Objectives, Applied to Dependency Grammar Induction
Valentin I. Spitkovsky | Hiyan Alshawi | Daniel Jurafsky
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

pdf bib
Unsupervised Dependency Parsing without Gold Part-of-Speech Tags
Valentin I. Spitkovsky | Hiyan Alshawi | Angel X. Chang | Daniel Jurafsky
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

pdf bib
Improving the Use of Pseudo-Words for Evaluating Selectional Preferences
Nathanael Chambers | Daniel Jurafsky
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

pdf bib
Learning to Follow Navigational Directions
Adam Vogel | Daniel Jurafsky
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

pdf bib
Profiting from Mark-Up: Hyper-Text Annotations for Guided Parsing
Valentin I. Spitkovsky | Daniel Jurafsky | Hiyan Alshawi
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

pdf bib
A Multi-Pass Sieve for Coreference Resolution
Karthik Raghunathan | Heeyoung Lee | Sudarshan Rangarajan | Nathanael Chambers | Mihai Surdeanu | Dan Jurafsky | Christopher Manning
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf bib
The Best Lexical Metric for Phrase-Based Statistical MT System Optimization
Daniel Cer | Christopher D. Manning | Daniel Jurafsky
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

pdf bib
From Baby Steps to Leapfrog: How “Less is More” in Unsupervised Dependency Parsing
Valentin I. Spitkovsky | Hiyan Alshawi | Daniel Jurafsky
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

pdf bib
Phrasal: A Statistical Machine Translation Toolkit for Exploring New Model Features
Daniel Cer | Michel Galley | Daniel Jurafsky | Christopher D. Manning
Proceedings of the NAACL HLT 2010 Demonstration Session

pdf bib
Viterbi Training Improves Unsupervised Dependency Parsing
Valentin I. Spitkovsky | Hiyan Alshawi | Daniel Jurafsky | Christopher D. Manning
Proceedings of the Fourteenth Conference on Computational Natural Language Learning

pdf bib
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)
Chu-Ren Huang | Dan Jurafsky
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

pdf bib
Coling 2010: Posters
Chu-Ren Huang | Dan Jurafsky
Coling 2010: Posters

pdf bib
A Database of Narrative Schemas
Nathanael Chambers | Dan Jurafsky
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper describes a new language resource of events and semantic roles that characterize real-world situations. Narrative schemas contain sets of related events (edit and publish), a temporal ordering of the events (edit before publish), and the semantic roles of the participants (authors publish books). This type of world knowledge was central to early research in natural language understanding; scripts, one of the main formalisms, represented common sequences of events that occur in the world. Unfortunately, most of this knowledge was hand-coded and time-consuming to create. Current machine learning techniques, as well as a new approach to learning through coreference chains, have allowed us to automatically extract rich event structure from open-domain text in the form of narrative schemas. The narrative schema resource described in this paper contains approximately 5000 unique events combined into schemas of varying sizes. We describe the resource, how it is learned, and a new evaluation of the coverage of these schemas over unseen documents.

pdf bib
Parsing to Stanford Dependencies: Trade-offs between Speed and Accuracy
Daniel Cer | Marie-Catherine de Marneffe | Dan Jurafsky | Chris Manning
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

We investigate a number of approaches to generating Stanford Dependencies, a widely used semantically-oriented dependency representation. We examine algorithms specifically designed for dependency parsing (Nivre, Nivre Eager, Covington, Eisner, and RelEx) as well as dependencies extracted from constituent parse trees created by phrase structure parsers (Charniak, Charniak-Johnson, Bikel, Berkeley and Stanford). We found that constituent parsers systematically outperform algorithms designed specifically for dependency parsing. The most accurate method for generating dependencies is the Charniak-Johnson reranking parser, with 89% (labeled) attachment F1 score. The fastest methods are Nivre, Nivre Eager, and Covington, used with a linear classifier to make local parsing decisions, which can parse the entire Penn Treebank development set (section 22) in less than 10 seconds on an Intel Xeon E5520. However, this speed comes with a substantial drop in F1 score (about 76% for labeled attachment) compared to competing methods. By tuning how much of the search space is explored by the Charniak-Johnson parser, we are able to arrive at a balanced configuration that is both fast and nearly as good as the most accurate approaches.
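
Since dependencies extracted from different parsers can differ in number, accuracy here is reported as an F1 over labeled dependency triples rather than a per-token score. A small self-contained sketch of that metric:

```python
def labeled_attachment_f1(pred_deps, gold_deps):
    """F1 over labeled (head, dependent, label) dependency triples."""
    pred, gold = set(pred_deps), set(gold_deps)
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    p, r = tp / len(pred), tp / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

gold = {("ate", "cat", "nsubj"), ("ate", "fish", "dobj")}
pred = {("ate", "cat", "nsubj"), ("fish", "the", "det")}
print(labeled_attachment_f1(pred, gold))  # 0.5
```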

2009

pdf bib
Extracting Social Meaning: Identifying Interactional Style in Spoken Conversation
Dan Jurafsky | Rajesh Ranganath | Dan McFarland
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics

pdf bib
It’s Not You, it’s Me: Detecting Flirting and its Misperception in Speed-Dates
Rajesh Ranganath | Dan Jurafsky | Dan McFarland
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf bib
Machine Translation Evaluation with Textual Entailment Features
Sebastian Padó | Michel Galley | Daniel Jurafsky | Christopher D. Manning
Proceedings of the Fourth Workshop on Statistical Machine Translation

pdf bib
Disambiguating “DE” for Chinese-English Machine Translation
Pi-Chuan Chang | Daniel Jurafsky | Christopher D. Manning
Proceedings of the Fourth Workshop on Statistical Machine Translation

pdf bib
Discriminative Reordering with Chinese Grammatical Relations Features
Pi-Chuan Chang | Huihsin Tseng | Dan Jurafsky | Christopher D. Manning
Proceedings of the Third Workshop on Syntax and Structure in Statistical Translation (SSST-3) at NAACL HLT 2009

pdf bib
Robust Machine Translation Evaluation with Entailment Features
Sebastian Padó | Michel Galley | Dan Jurafsky | Christopher D. Manning
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

pdf bib
Unsupervised Learning of Narrative Schemas and their Participants
Nathanael Chambers | Dan Jurafsky
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

pdf bib
Distant supervision for relation extraction without labeled data
Mike Mintz | Steven Bills | Rion Snow | Daniel Jurafsky
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

2008

pdf bib
Regularization and Search for Minimum Error Rate Training
Daniel Cer | Dan Jurafsky | Christopher D. Manning
Proceedings of the Third Workshop on Statistical Machine Translation

pdf bib
Which Words Are Hard to Recognize? Prosodic, Lexical, and Disfluency Factors that Increase ASR Error Rates
Sharon Goldwater | Dan Jurafsky | Christopher D. Manning
Proceedings of ACL-08: HLT

pdf bib
Unsupervised Learning of Narrative Event Chains
Nathanael Chambers | Dan Jurafsky
Proceedings of ACL-08: HLT

pdf bib
Cheap and Fast – But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks
Rion Snow | Brendan O’Connor | Daniel Jurafsky | Andrew Ng
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

pdf bib
Studying the History of Ideas Using Topic Models
David Hall | Daniel Jurafsky | Christopher D. Manning
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

pdf bib
Jointly Combining Implicit Constraints Improves Temporal Ordering
Nathanael Chambers | Daniel Jurafsky
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2007

pdf bib
To Memorize or to Predict: Prominence labeling in Conversational Speech
Ani Nenkova | Jason Brenier | Anubha Kothari | Sasha Calhoun | Laura Whitton | David Beaver | Dan Jurafsky
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference

pdf bib
Disambiguating Between Generic and Referential “You” in Dialog
Surabhi Gupta | Matthew Purver | Dan Jurafsky
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions

pdf bib
Classifying Temporal Relations Between Events
Nathanael Chambers | Shan Wang | Dan Jurafsky
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions

pdf bib
Measuring Importance and Query Relevance in Topic-focused Multi-document Summarization
Surabhi Gupta | Ani Nenkova | Dan Jurafsky
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions

pdf bib
Resolving “You” in Multi-Party Dialog
Surabhi Gupta | John Niekrasz | Matthew Purver | Dan Jurafsky
Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue

pdf bib
Learning to Merge Word Senses
Rion Snow | Sushant Prakash | Daniel Jurafsky | Andrew Y. Ng
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

2006

pdf bib
Semantic Taxonomy Induction from Heterogenous Evidence
Rion Snow | Daniel Jurafsky | Andrew Y. Ng
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

pdf bib
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing
Dan Jurafsky | Eric Gaussier
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing

2005

pdf bib
Morphological features help POS tagging of unknown words across language varieties
Huihsin Tseng | Daniel Jurafsky | Christopher Manning
Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing

pdf bib
A Conditional Random Field Word Segmenter for Sighan Bakeoff 2005
Huihsin Tseng | Pichuan Chang | Galen Andrew | Daniel Jurafsky | Christopher Manning
Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing

pdf bib
Semantic Role Chunking Combining Complementary Syntactic Views
Sameer Pradhan | Kadri Hacioglu | Wayne Ward | James H. Martin | Daniel Jurafsky
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)

pdf bib
Semantic Role Labeling Using Different Syntactic Views
Sameer Pradhan | Wayne Ward | Kadri Hacioglu | James Martin | Daniel Jurafsky
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

2004

pdf bib
Semantic Role Labeling by Tagging Syntactic Chunks
Kadri Hacioglu | Sameer Pradhan | Wayne Ward | James H. Martin | Daniel Jurafsky
Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004

pdf bib
Shallow Semantic Parsing using Support Vector Machines
Sameer S. Pradhan | Wayne H. Ward | Kadri Hacioglu | James H. Martin | Dan Jurafsky
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

pdf bib
Shallow Semantic Parsing of Chinese
Honglin Sun | Daniel Jurafsky
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

pdf bib
Parsing Arguments of Nominalizations in English and Chinese
Sameer Pradhan | Honglin Sun | Wayne Ward | James H. Martin | Daniel Jurafsky
Proceedings of HLT-NAACL 2004: Short Papers

pdf bib
Automatic Tagging of Arabic Text: From Raw Text to Base Phrase Chunks
Mona Diab | Kadri Hacioglu | Daniel Jurafsky
Proceedings of HLT-NAACL 2004: Short Papers

2003

pdf bib
The Effect of Rhythm on Structural Disambiguation in Chinese
Honglin Sun | Dan Jurafsky
Proceedings of the Second SIGHAN Workshop on Chinese Language Processing

2002

pdf bib
Automatic Labeling of Semantic Roles
Daniel Gildea | Daniel Jurafsky
Computational Linguistics, Volume 28, Number 3, September 2002

2001

pdf bib
Knowledge-Free Induction of Inflectional Morphologies
Patrick Schone | Daniel Jurafsky
Second Meeting of the North American Chapter of the Association for Computational Linguistics

pdf bib
Is Knowledge-Free Induction of Multiword Unit Dictionary Headwords a Solved Problem?
Patrick Schone | Daniel Jurafsky
Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing

2000

pdf bib
Knowledge-Free Induction of Morphology Using Latent Semantic Analysis
Patrick Schone | Daniel Jurafsky
Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop

pdf bib
Verb Subcategorization Frequency Differences between Business-News and Balanced Corpora: The Role of Verb Sense
Douglas Roland | Daniel Jurafsky | Lise Menn | Susanne Gahl | Elezabeth Elder | Chris Riddoch
The Workshop on Comparing Corpora

pdf bib
Dialogue act modeling for automatic tagging and recognition of conversational speech
Andreas Stolcke | Klaus Ries | Noah Coccaro | Elizabeth Shriberg | Rebecca Bates | Daniel Jurafsky | Paul Taylor | Rachel Martin | Carol Van Ess-Dykema | Marie Meteer
Computational Linguistics, Volume 26, Number 3, September 2000

pdf bib
Automatic Labeling of Semantic Roles
Daniel Gildea | Daniel Jurafsky
Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics

1998

pdf bib
How Verb Subcategorization Frequencies are Affected by Corpus Choice
Douglas Roland | Daniel Jurafsky
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

pdf bib
How Verb Subcategorization Frequencies Are Affected By Corpus Choice
Douglas Roland | Daniel Jurafsky
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics

pdf bib
Lexical, Prosodic, and Syntactic Cues for Dialog Acts
Daniel Jurafsky | Elizabeth Shriberg | Barbara Fox | Traci Curl
Discourse Relations and Discourse Markers

1996

pdf bib
Learning Bias and Phonological-Rule Induction
Daniel Gildea | Daniel Jurafsky
Computational Linguistics, Volume 22, Number 4, December 1996

1995

pdf bib
Learning Phonological Rule Probabilities from Speech Corpora with Exploratory Computational Phonology
Gary Tajchman | Daniel Jurafsky | Eric Fosler
33rd Annual Meeting of the Association for Computational Linguistics

pdf bib
Automatic Induction of Finite State Transducers for Simple Phonological Rules
Daniel Gildea | Daniel Jurafsky
33rd Annual Meeting of the Association for Computational Linguistics

1990

pdf bib
Representing and Integrating Linguistic Knowledge
Daniel Jurafsky
COLING 1990 Volume 2: Papers presented to the 13th International Conference on Computational Linguistics

1988

pdf bib
Issues in Relating Syntax and Semantics
Daniel Jurafsky
Coling Budapest 1988 Volume 1: International Conference on Computational Linguistics
