Lili Kotlerman


2019

A Dataset of General-Purpose Rebuttal
Matan Orbach | Yonatan Bilu | Ariel Gera | Yoav Kantor | Lena Dankin | Tamar Lavee | Lili Kotlerman | Shachar Mirkin | Michal Jacovi | Ranit Aharonov | Noam Slonim
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

In Natural Language Understanding, the task of response generation usually focuses on responses to short texts, such as tweets or a turn in a dialog. Here we present a novel task of producing a critical response to a long argumentative text, and suggest a method based on general rebuttal arguments to address it. We do this in the context of the recently suggested task of listening comprehension over argumentative content: given a speech on a specified topic and a list of relevant arguments, the goal is to determine which of the arguments appear in the speech. The general rebuttals we describe here (in English) remove the need for topic-specific arguments to be provided, as they prove applicable to a large set of topics. This allows creating responses beyond the scope of topics for which specific arguments are available. All data collected during this work is freely available for research.
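The listening comprehension setting described above, deciding which of a set of candidate arguments is actually mentioned in a long speech, can be illustrated with a simple lexical sketch. The Python snippet below assumes a TF-IDF representation and an arbitrary similarity threshold; it is an illustrative baseline only, not the method used in the paper.

# Minimal sketch: flag which candidate arguments are (lexically) covered by a
# speech transcript. TF-IDF + cosine similarity and the 0.2 threshold are
# illustrative choices, not taken from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def detect_mentioned_arguments(speech_sentences, candidate_arguments, threshold=0.2):
    """Return the candidate arguments whose best-matching speech sentence
    exceeds a cosine-similarity threshold (illustrative heuristic only)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on speech and arguments together so they share one vocabulary.
    matrix = vectorizer.fit_transform(speech_sentences + candidate_arguments)
    speech_vecs = matrix[:len(speech_sentences)]
    argument_vecs = matrix[len(speech_sentences):]
    similarities = cosine_similarity(argument_vecs, speech_vecs)  # (args, sentences)
    return [argument for argument, row in zip(candidate_arguments, similarities)
            if row.max() >= threshold]

speech = ["We should ban boxing because it causes brain damage.",
          "Athletes deserve protection from lifelong injury."]
arguments = ["Boxing causes brain damage.",               # mentioned in the speech
             "Adults are free to risk their own bodies."]  # not mentioned
print(detect_mentioned_arguments(speech, arguments))  # ['Boxing causes brain damage.']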

Crowd-sourcing annotation of complex NLU tasks: A case study of argumentative content annotation
Tamar Lavee | Lili Kotlerman | Matan Orbach | Yonatan Bilu | Michal Jacovi | Ranit Aharonov | Noam Slonim
Proceedings of the First Workshop on Aggregating and Analysing Crowdsourced Annotations for NLP

Recent advancements in machine reading and listening comprehension involve the annotation of long texts. Such tasks are typically time-consuming, making crowd annotation an attractive solution, yet their complexity often makes such a solution infeasible. In particular, a major concern is that crowd annotators may be tempted to skim through long texts and answer questions without reading thoroughly. We present a case study of adapting this type of task to the crowd. The task is to identify claims in a debate speech several minutes long. We show that sentence-by-sentence annotation does not scale and that labeling only a subset of sentences is insufficient. Instead, we propose a scheme for effectively performing the full, complex task with crowd annotators, allowing the collection of large-scale annotated datasets. We believe that the challenges and pitfalls we encountered, as well as the lessons learned, are relevant in general when collecting data for large-scale natural language understanding (NLU) tasks.
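The annotation scheme itself is not reproduced here, but one recurring step in any such crowd-sourcing pipeline is aggregating the labels of several annotators per item. The snippet below is a generic, hypothetical illustration of majority-vote aggregation; it is not the scheme proposed in the paper, and all identifiers are invented for the example.

# Generic sketch: aggregate per-annotator binary labels
# ("claim X is mentioned in speech Y") by strict majority vote.
from collections import defaultdict

def majority_vote(labels):
    """labels: iterable of (item_id, annotator_id, is_mentioned) triples.
    Returns {item_id: aggregated_bool}."""
    votes = defaultdict(list)
    for item_id, _annotator, is_mentioned in labels:
        votes[item_id].append(is_mentioned)
    return {item: sum(v) > len(v) / 2 for item, v in votes.items()}

raw = [("speech1-claim3", "w1", True), ("speech1-claim3", "w2", True),
       ("speech1-claim3", "w3", False), ("speech1-claim7", "w1", False),
       ("speech1-claim7", "w2", False), ("speech1-claim7", "w3", True)]
print(majority_vote(raw))  # {'speech1-claim3': True, 'speech1-claim7': False}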

Towards Effective Rebuttal: Listening Comprehension Using Corpus-Wide Claim Mining
Tamar Lavee | Matan Orbach | Lili Kotlerman | Yoav Kantor | Shai Gretz | Lena Dankin | Michal Jacovi | Yonatan Bilu | Ranit Aharonov | Noam Slonim
Proceedings of the 6th Workshop on Argument Mining

Engaging in a live debate requires, among other things, the ability to effectively rebut arguments made by one's opponent. In particular, this requires first identifying those arguments. Here, we suggest doing so by automatically mining claims from a corpus of news articles containing billions of sentences, and searching for them in a given speech. This raises the question of whether such claims indeed correspond to those made in spoken speeches. To this end, we collected a large dataset of 400 speeches in English discussing 200 controversial topics, mined claims for each topic, and asked annotators to identify the mined claims mentioned in each speech. Results show that in the vast majority of speeches, debaters indeed make use of such claims. In addition, we present several baselines for the automatic detection of mined claims in speeches, forming the basis for future work. All collected data is freely available for research.
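As a rough illustration of the corpus-wide claim mining step mentioned above, the toy heuristic below keeps corpus sentences that mention a topic term and assert something via the token "that", and extracts the asserted clause as a claim candidate. It is purely illustrative and stands in for a trained claim detection pipeline; none of the names or patterns are taken from the paper.

# Toy sketch of claim-candidate mining for a given controversial topic.
import re

def mine_claim_candidates(corpus_sentences, topic_terms):
    topic_re = re.compile("|".join(map(re.escape, topic_terms)), re.IGNORECASE)
    candidates = []
    for sent in corpus_sentences:
        if topic_re.search(sent) and re.search(r"\bthat\b", sent, re.IGNORECASE):
            # Keep the clause after "that" as the claim candidate.
            claim = re.split(r"\bthat\b", sent, maxsplit=1, flags=re.IGNORECASE)[1].strip()
            if claim:
                candidates.append(claim)
    return candidates

corpus = ["Critics argue that boxing glorifies violence.",
          "The match lasted twelve rounds.",
          "Studies suggest that boxing causes lasting brain injury."]
print(mine_claim_candidates(corpus, ["boxing"]))
# ['boxing glorifies violence.', 'boxing causes lasting brain injury.']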

2018

A Recorded Debating Dataset
Shachar Mirkin | Michal Jacovi | Tamar Lavee | Hong-Kwang Kuo | Samuel Thomas | Leslie Sager | Lili Kotlerman | Elad Venezian | Noam Slonim
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Listening Comprehension over Argumentative Content
Shachar Mirkin | Guy Moshkowich | Matan Orbach | Lili Kotlerman | Yoav Kantor | Tamar Lavee | Michal Jacovi | Yonatan Bilu | Ranit Aharonov | Noam Slonim
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

This paper presents a task for machine listening comprehension in the argumentation domain, along with a corresponding dataset in English. We recorded 200 spontaneous speeches arguing for or against 50 controversial topics. For each speech, we formulated a question aimed at confirming or rejecting the occurrence of potential arguments in the speech. Labels were collected by listening to the speech and marking which arguments were mentioned by the speaker. We applied baseline methods to the task, providing a benchmark for future work on this dataset. All data used in this work is freely available for research.

2015

Multi-Level Alignments As An Extensible Representation Basis for Textual Entailment Algorithms
Tae-Gil Noh | Sebastian Padó | Vered Shwartz | Ido Dagan | Vivi Nastase | Kathrin Eichler | Lili Kotlerman | Meni Adler
Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics

2013

ParaQuery: Making Sense of Paraphrase Collections
Lili Kotlerman | Nitin Madnani | Aoife Cahill
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations

2012

Sentence Clustering via Projection over Term Clusters
Lili Kotlerman | Ido Dagan | Maya Gorodetsky | Ezra Daya
*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)

2011

Classification-based Contextual Preferences
Shachar Mirkin | Ido Dagan | Lili Kotlerman | Idan Szpektor
Proceedings of the TextInfer 2011 Workshop on Textual Entailment

A Support Tool for Deriving Domain Taxonomies from Wikipedia
Lili Kotlerman | Zemer Avital | Ido Dagan | Amnon Lotan | Ofer Weintraub
Proceedings of the International Conference Recent Advances in Natural Language Processing 2011

2009

Directional Distributional Similarity for Lexical Expansion
Lili Kotlerman | Ido Dagan | Idan Szpektor | Maayan Zhitomirsky-Geffet
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers