IBM MNLP IE at CASE 2021 Task 2: NLI Reranking for Zero-Shot Text Classification

Ken Barker, Parul Awasthy, Jian Ni, Radu Florian


Abstract
Supervised models can achieve very high accuracy for fine-grained text classification. In practice, however, training data may be abundant for some types but scarce or even non-existent for others. We propose a hybrid architecture that uses as much labeled data as available for fine-tuning classification models, while also allowing for types with little (few-shot) or no (zero-shot) labeled data. In particular, we pair a supervised text classification model with a Natural Language Inference (NLI) reranking model. The NLI reranker uses a textual representation of target types that allows it to score the strength with which a type is implied by a text, without requiring training data for the types. Experiments show that the NLI model is very sensitive to the choice of textual representation, but can be effective for classifying unseen types. It can also improve classification accuracy for the known types of an already highly accurate supervised model.
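The reranking idea in the abstract, pairing supervised classifier scores with NLI entailment scores over textual representations of the types, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hypothesis template, the score interpolation, and all labels and weights below are assumptions for demonstration.

```python
# Illustrative sketch of NLI reranking for zero-shot text classification.
# The hypothesis template and the linear score combination are assumed
# for illustration; the paper's textual representations and reranking
# scheme may differ.

def hypothesis(label):
    # Textual representation of a target type as an NLI hypothesis.
    return f"This text is about {label}."

def rerank(classifier_scores, nli_entailment, alpha=0.5):
    """Combine supervised classifier scores with NLI entailment scores.

    classifier_scores: {label: prob} from a supervised model; labels
        with no training data (zero-shot) are simply absent and
        treated as 0.0.
    nli_entailment: {label: prob} that the text entails hypothesis(label).
    alpha: interpolation weight on the supervised score (assumed).
    """
    labels = set(classifier_scores) | set(nli_entailment)
    combined = {
        lab: alpha * classifier_scores.get(lab, 0.0)
             + (1 - alpha) * nli_entailment.get(lab, 0.0)
        for lab in labels
    }
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: "strike" is unseen by the supervised classifier,
# but the NLI model finds its hypothesis strongly entailed.
clf = {"protest": 0.6, "election": 0.4}                    # no "strike" entry
nli = {"protest": 0.3, "election": 0.1, "strike": 0.95}
ranking = rerank(clf, nli)
print(ranking[0][0])  # the zero-shot label can win after reranking
```

This captures why the approach handles zero-shot types: the NLI score needs only the type's textual representation, so a label with no labeled examples can still outrank supervised predictions.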
Anthology ID:
2021.case-1.24
Volume:
Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021)
Month:
August
Year:
2021
Address:
Online
Editor:
Ali Hürriyetoğlu
Venue:
CASE
Publisher:
Association for Computational Linguistics
Pages:
193–202
URL:
https://aclanthology.org/2021.case-1.24
DOI:
10.18653/v1/2021.case-1.24
Cite (ACL):
Ken Barker, Parul Awasthy, Jian Ni, and Radu Florian. 2021. IBM MNLP IE at CASE 2021 Task 2: NLI Reranking for Zero-Shot Text Classification. In Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), pages 193–202, Online. Association for Computational Linguistics.
Cite (Informal):
IBM MNLP IE at CASE 2021 Task 2: NLI Reranking for Zero-Shot Text Classification (Barker et al., CASE 2021)
PDF:
https://aclanthology.org/2021.case-1.24.pdf