2020
Unsupervised Expressive Rules Provide Explainability and Assist Human Experts Grasping New Domains
Eyal Shnarch | Leshem Choshen | Guy Moshkowich | Ranit Aharonov | Noam Slonim
Findings of the Association for Computational Linguistics: EMNLP 2020
Approaching new data can be daunting: you do not know how your categories of interest are realized in it, there is commonly no labeled data at hand, and the performance of domain adaptation methods is unsatisfactory. Aiming to assist domain experts in their first steps into a new task over a new corpus, we present an unsupervised approach that reveals complex rules which cluster the unexplored corpus by its prominent categories (or facets). These rules are human-readable, thus providing an ingredient that has become in short supply lately - explainability. Each rule offers an explanation for the commonality of all the texts it clusters together. The experts can then identify which rules best capture texts of their categories of interest, and utilize them to deepen their understanding of these categories. These rules can also bootstrap the process of data labeling by pointing at a subset of the corpus which is enriched with texts demonstrating the target categories. We present an extensive evaluation of the usefulness of these rules in identifying target categories, as well as a user study which assesses their interpretability.
2019
Are You Convinced? Choosing the More Convincing Evidence with a Siamese Network
Martin Gleize | Eyal Shnarch | Leshem Choshen | Lena Dankin | Guy Moshkowich | Ranit Aharonov | Noam Slonim
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
With the advancement of argument detection, we suggest paying more attention to the challenging task of identifying the more convincing arguments. Machines capable of responding to and interacting with humans in helpful ways have become ubiquitous. We now expect them to discuss with us the more delicate questions in our world, and they should do so armed with effective arguments. But what makes an argument more persuasive? What will convince you? In this paper, we present a new data set, IBM-EviConv, of pairs of evidence labeled for convincingness, designed to be more challenging than existing alternatives. We also propose a Siamese neural network architecture shown to outperform several baselines on both a prior convincingness data set and our own. Finally, we provide insights into our experimental results and the various kinds of argumentative value our method is capable of detecting.
Argument Invention from First Principles
Yonatan Bilu | Ariel Gera | Daniel Hershcovich | Benjamin Sznajder | Dan Lahav | Guy Moshkowich | Anael Malet | Assaf Gavron | Noam Slonim
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Competitive debaters often find themselves facing a challenging task: how to debate a topic they know very little about, with only minutes to prepare and without access to books or the Internet? What they often do is rely on "first principles" - commonplace arguments which are relevant to many topics, and which they have refined in past debates. In this work we aim to explicitly define a taxonomy of such principled recurring arguments and, given a controversial topic, to automatically identify which of these arguments are relevant to it. As far as we know, this is the first time that this approach to argument invention is formalized and made explicit in the context of NLP. The main goal of this work is to show that it is possible to define such a taxonomy. While the taxonomy suggested here should be thought of as a "first attempt", it is nonetheless coherent, covers the relevant topics well, coincides with what professional debaters actually argue in their speeches, and facilitates automatic argument invention for new topics.
2018
Listening Comprehension over Argumentative Content
Shachar Mirkin | Guy Moshkowich | Matan Orbach | Lili Kotlerman | Yoav Kantor | Tamar Lavee | Michal Jacovi | Yonatan Bilu | Ranit Aharonov | Noam Slonim
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
This paper presents a task for machine listening comprehension in the argumentation domain, along with a corresponding dataset in English. We recorded 200 spontaneous speeches arguing for or against 50 controversial topics. For each speech, we formulated a question aimed at confirming or rejecting the occurrence of potential arguments in the speech. Labels were collected by listening to the speech and marking which arguments were mentioned by the speaker. We applied baseline methods to the task, to serve as a benchmark for future work on this dataset. All data used in this work is freely available for research.