Vladimir Eidelman


2022

An Item Response Theory Framework for Persuasion
Anastassia Kornilova | Vladimir Eidelman | Daniel Douglass
Findings of the Association for Computational Linguistics: NAACL 2022

In this paper, we apply Item Response Theory, popular in education and political science research, to the analysis of argument persuasiveness in language. We empirically evaluate the model’s performance on three datasets, including a novel dataset in the area of political advocacy. We show the advantages of separating these components under several style and content representations, including an evaluation of whether the speaker embeddings generated by the model parallel real-world observations about persuadability.

2019

BillSum: A Corpus for Automatic Summarization of US Legislation
Anastassia Kornilova | Vladimir Eidelman
Proceedings of the 2nd Workshop on New Frontiers in Summarization

Automatic summarization methods have been studied on a variety of domains, including news and scientific articles. Yet, legislation has not previously been considered for this task, despite US Congress and state governments releasing tens of thousands of bills every year. In this paper, we introduce BillSum, the first dataset for summarization of US Congressional and California state bills. We explain the properties of the dataset that make it more challenging to process than other domains. Then, we benchmark extractive methods that consider neural sentence representations and traditional contextual features. Finally, we demonstrate that models built on Congressional bills can be used to summarize California bills, thus showing that methods developed on this dataset can transfer to states without human-written summaries.

2018

Party Matters: Enhancing Legislative Embeddings with Author Attributes for Vote Prediction
Anastassia Kornilova | Daniel Argyle | Vladimir Eidelman
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Predicting how Congressional legislators will vote is important for understanding their past and future behavior. However, previous work on roll-call prediction has been limited to single session settings, thus not allowing for generalization across sessions. In this paper, we show that text alone is insufficient for modeling voting outcomes in new contexts, as session changes lead to changes in the underlying data generation process. We propose a novel neural method for encoding documents alongside additional metadata, achieving an average of a 4% boost in accuracy over the previous state-of-the-art.

How Predictable is Your State? Leveraging Lexical and Contextual Information for Predicting Legislative Floor Action at the State Level
Vladimir Eidelman | Anastassia Kornilova | Daniel Argyle
Proceedings of the 27th International Conference on Computational Linguistics

Modeling U.S. Congressional legislation and roll-call votes has received significant attention in previous literature. However, although legislators across the 50 state governments and D.C. propose over 100,000 bills each year, enacting over 30% of them on average, state-level analysis has received relatively little attention, due in part to the difficulty of obtaining the necessary data. Since each state legislature is guided by its own procedures, politics, and issues, it is difficult to qualitatively assess the factors that affect the likelihood of a legislative initiative succeeding. We present several methods for modeling the likelihood of a bill receiving floor action across all 50 states and D.C. We utilize the lexical content of over 1 million bills, along with contextual legislature- and legislator-derived features, to build our predictive models, allowing a comparison of what factors are important to the lawmaking process. Furthermore, we show that these signals hold complementary predictive power, together achieving an average improvement in accuracy of 18% over state-specific baselines.

2014

Polylingual Tree-Based Topic Models for Translation Domain Adaptation
Yuening Hu | Ke Zhai | Vladimir Eidelman | Jordan Boyd-Graber
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2013

Online Relative Margin Maximization for Statistical Machine Translation
Vladimir Eidelman | Yuval Marton | Philip Resnik
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Mr. MIRA: Open-Source Large-Margin Structured Learning on MapReduce
Vladimir Eidelman | Ke Wu | Ferhan Ture | Philip Resnik | Jimmy Lin
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations

Towards Efficient Large-Scale Feature-Rich Statistical Machine Translation
Vladimir Eidelman | Ke Wu | Ferhan Ture | Philip Resnik | Jimmy Lin
Proceedings of the Eighth Workshop on Statistical Machine Translation

2012

Topic Models for Dynamic Translation Model Adaptation
Vladimir Eidelman | Jordan Boyd-Graber | Philip Resnik
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Optimization Strategies for Online Large-Margin Learning in Machine Translation
Vladimir Eidelman
Proceedings of the Seventh Workshop on Statistical Machine Translation

Unsupervised Feature-Rich Clustering
Vladimir Eidelman
Proceedings of COLING 2012: Posters

2011

Noisy SMS Machine Translation in Low-Density Languages
Vladimir Eidelman | Kristy Hollingshead | Philip Resnik
Proceedings of the Sixth Workshop on Statistical Machine Translation

The Value of Monolingual Crowdsourcing in a Real-World Translation Scenario: Simulation using Haitian Creole Emergency SMS Messages
Chang Hu | Philip Resnik | Yakov Kronrod | Vladimir Eidelman | Olivia Buzek | Benjamin B. Bederson
Proceedings of the Sixth Workshop on Statistical Machine Translation

2010

cdec: A Decoder, Alignment, and Learning Framework for Finite-State and Context-Free Translation Models
Chris Dyer | Adam Lopez | Juri Ganitkevitch | Jonathan Weese | Ferhan Ture | Phil Blunsom | Hendra Setiawan | Vladimir Eidelman | Philip Resnik
Proceedings of the ACL 2010 System Demonstrations

Lessons Learned in Part-of-Speech Tagging of Conversational Speech
Vladimir Eidelman | Zhongqiang Huang | Mary Harper
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

The University of Maryland Statistical Machine Translation System for the Fifth Workshop on Machine Translation
Vladimir Eidelman | Chris Dyer | Philip Resnik
Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR

2009

Improving A Simple Bigram HMM Part-of-Speech Tagger by Latent Annotation and Self-Training
Zhongqiang Huang | Vladimir Eidelman | Mary Harper
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers

2008

Inferring Activity Time in News through Event Modeling
Vladimir Eidelman
Proceedings of the ACL-08: HLT Student Research Workshop

BART: A Modular Toolkit for Coreference Resolution
Yannick Versley | Simone Paolo Ponzetto | Massimo Poesio | Vladimir Eidelman | Alan Jern | Jason Smith | Xiaofeng Yang | Alessandro Moschitti
Proceedings of the ACL-08: HLT Demo Session

BART: A modular toolkit for coreference resolution
Yannick Versley | Simone Ponzetto | Massimo Poesio | Vladimir Eidelman | Alan Jern | Jason Smith | Xiaofeng Yang | Alessandro Moschitti
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

Developing a full coreference system able to run all the way from raw text to semantic interpretation is a considerable engineering effort. Accordingly, there is very limited availability of off-the-shelf tools for researchers whose interests are not primarily in coreference, or for others who want to concentrate on a specific aspect of the problem. We present BART, a highly modular toolkit for developing coreference applications. In the Johns Hopkins workshop on using lexical and encyclopedic knowledge for entity disambiguation, the toolkit was used to extend a reimplementation of Soon et al.’s proposal with a variety of additional syntactic and knowledge-based features, and to experiment with alternative resolution processes, preprocessing tools, and classifiers. BART has been released as open source software and is available from http://www.sfs.uni-tuebingen.de/~versley/BART