Oren Sar Shalom
2020
Proceedings of Knowledgeable NLP: the First Workshop on Integrating Structured Knowledge and Neural Networks for NLP
Oren Sar Shalom | Alexander Panchenko | Cicero dos Santos | Varvara Logacheva | Alessandro Moschitti | Ido Dagan
Proceedings of Knowledgeable NLP: the First Workshop on Integrating Structured Knowledge and Neural Networks for NLP
2018
Understanding Convolutional Neural Networks for Text Classification
Alon Jacovi | Oren Sar Shalom | Yoav Goldberg
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
We present an analysis into the inner workings of Convolutional Neural Networks (CNNs) for processing text. CNNs used for computer vision can be interpreted by projecting filters into image space, but for discrete sequence inputs CNNs remain a mystery. We aim to understand the method by which the networks process and classify text. We examine a common hypothesis about this problem: that filters, accompanied by global max-pooling, serve as ngram detectors. We show that filters may capture several different semantic classes of ngrams by using different activation patterns, and that global max-pooling induces behavior which separates important ngrams from the rest. Finally, we show practical use cases derived from our findings in the form of model interpretability (explaining a trained model by deriving a concrete identity for each filter, bridging the gap between visualization tools in vision tasks and NLP) and prediction interpretability (explaining predictions).
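For readers unfamiliar with the architecture the abstract analyzes, the following is a minimal sketch, not the authors' code, of a text CNN with convolution filters over token embeddings followed by global max-pooling. The vocabulary size, embedding dimension, filter count, ngram width, and class count are illustrative assumptions. The key point mirrored from the abstract is that max-pooling reduces each filter to the activation of a single ngram, and the index of that ngram is what filter-level and prediction-level analyses can inspect.

```python
# Minimal sketch (illustrative, not the paper's implementation) of a
# convolutional text classifier: filters over token embeddings plus
# global max-pooling, so each filter selects exactly one ngram per input.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, num_filters=50,
                 ngram_size=3, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Each output channel is one "filter"; its kernel spans ngram_size tokens.
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=ngram_size)
        self.classifier = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        activations = torch.relu(self.conv(x))         # (batch, num_filters, num_ngrams)
        # Global max-pooling keeps only the strongest ngram activation per filter;
        # the indices record which ngram each filter selected.
        pooled, ngram_positions = activations.max(dim=2)
        return self.classifier(pooled), ngram_positions

model = TextCNN()
logits, positions = model(torch.randint(0, 10000, (4, 20)))
print(logits.shape, positions.shape)  # torch.Size([4, 2]) torch.Size([4, 50])
```

Inspecting `ngram_positions` for a trained model of this form is one simple way to see which ngram each filter responded to on a given input, in the spirit of the interpretability use cases the abstract describes.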
Co-authors
- Alexander Panchenko 1
- Cicero dos Santos 1
- Varvara Logacheva 1
- Alessandro Moschitti 1
- Ido Dagan 1
- Alon Jacovi 1
- Yoav Goldberg 1