Contrapositive Local Class Inference

Omid Kashefi, Rebecca Hwa


Abstract
Certain classification tasks can be performed at multiple levels of granularity; for example, we might want to know the sentiment polarity of a document, a sentence, or a phrase. Often, the prediction for a greater context (e.g., sentences or paragraphs) is informative for a more localized prediction at a smaller semantic unit (e.g., words or phrases). However, directly inferring the most salient local features from the global prediction may overlook the semantics of this relationship. This work argues that inference along the contraposition relationship between the local prediction and the corresponding global prediction yields an inference framework that is more accurate and more robust to noise. We show how this contrapositive framework can be implemented as a transfer function that rewrites a greater context from one class to another, and we demonstrate how an appropriate transfer function can be trained from a noisy user-generated corpus. The experimental results validate our insight that the proposed contrapositive framework outperforms alternative approaches in resource-constrained problem domains.
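The core idea can be illustrated with a toy sketch. This is not the authors' implementation: the lexicon-based global classifier and the word-substitution transfer function below are hypothetical stand-ins, used only to show the contrapositive logic (if the global prediction is not class c, no local unit is inferred as class c; otherwise, the units the transfer function must change to flip the global class are the inferred local units).

```python
# Toy sketch of contrapositive local class inference (illustrative only).
# Assumptions: a global sentiment classifier and a transfer function that
# rewrites a sentence from the negative class to the positive class.
# Both are stand-in lexicon-based toys, not the trained models in the paper.

NEG_TO_POS = {"terrible": "great", "boring": "engaging"}  # toy transfer lexicon


def global_classify(tokens):
    """Toy global classifier: negative iff any known negative word appears."""
    return "neg" if any(t in NEG_TO_POS for t in tokens) else "pos"


def transfer(tokens):
    """Toy transfer function: rewrite the sentence toward the positive class."""
    return [NEG_TO_POS.get(t, t) for t in tokens]


def contrapositive_local_inference(tokens):
    """Infer locally negative tokens via contraposition.

    Contrapositive step: if the global prediction is not negative, then no
    local unit is negative, so return nothing. Otherwise, the tokens the
    transfer function had to change to flip the class are the local units.
    """
    if global_classify(tokens) != "neg":
        return []
    rewritten = transfer(tokens)
    return [t for t, r in zip(tokens, rewritten) if t != r]


sentence = "the movie was terrible and boring".split()
print(contrapositive_local_inference(sentence))  # → ['terrible', 'boring']
```

In the paper, the transfer function is learned from a noisy user-generated corpus rather than taken from a fixed lexicon; the sketch only shows how the contraposition between the global and local predictions drives the inference.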
Anthology ID:
2021.wnut-1.41
Volume:
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
Month:
November
Year:
2021
Address:
Online
Editors:
Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
Venue:
WNUT
Publisher:
Association for Computational Linguistics
Pages:
371–380
URL:
https://aclanthology.org/2021.wnut-1.41
DOI:
10.18653/v1/2021.wnut-1.41
Cite (ACL):
Omid Kashefi and Rebecca Hwa. 2021. Contrapositive Local Class Inference. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 371–380, Online. Association for Computational Linguistics.
Cite (Informal):
Contrapositive Local Class Inference (Kashefi & Hwa, WNUT 2021)
PDF:
https://aclanthology.org/2021.wnut-1.41.pdf
Code:
omidkashefi/contrapositive-inference