How effective is incongruity? Implications for code-mixed sarcasm detection

Aditya Shah, Chandresh Maurya


Abstract
The presence of sarcasm in conversational systems such as chatbots and on social media platforms like Facebook and Twitter poses several challenges for downstream NLP tasks, because the intended meaning of a sarcastic text is contrary to what is literally expressed. Moreover, the use of code-mixed language to express sarcasm is growing steadily. Current NLP techniques have limited success on code-mixed data owing to its different lexicon and syntax and the scarcity of labeled corpora. To address the joint problem of code-mixing and sarcasm detection, we propose capturing incongruity through sub-word level embeddings learned via fastText. Empirical results show that our proposed model achieves an F1-score on a code-mixed Hinglish dataset comparable to pretrained multilingual models while training 10x faster and using a smaller memory footprint.
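To illustrate the core idea of sub-word level embeddings for code-mixed text, here is a minimal Python sketch, not the authors' released code. It uses gensim's FastText implementation; the toy Hinglish corpus, hyperparameters (vector size, n-gram range, epochs), and the averaging-based sentence representation are illustrative assumptions, not details taken from the paper.

# Minimal sketch: learn sub-word aware fastText embeddings on code-mixed text
# and build a sentence vector that a downstream sarcasm classifier could consume.
from gensim.models import FastText
import numpy as np

# Toy code-mixed (Hinglish) sentences; in practice this would be the training
# split of the sarcasm dataset (assumed here for illustration).
corpus = [
    "wah kya baat hai monday ko hi exam rakh diya".split(),
    "great job yaar train phir se late hai".split(),
    "aaj ka match bahut accha tha".split(),
]

# Character n-grams (min_n..max_n) let fastText produce vectors for unseen,
# misspelled, or transliterated words, which are common in code-mixed text.
model = FastText(
    sentences=corpus,
    vector_size=100,
    window=5,
    min_count=1,
    min_n=3,
    max_n=6,
    epochs=10,
)

def sentence_embedding(tokens):
    """Average the sub-word level vectors of all tokens in a sentence."""
    return np.mean([model.wv[t] for t in tokens], axis=0)

# Out-of-vocabulary tokens still receive a vector via their character n-grams.
vec = sentence_embedding("kya mast traffic hai aaj".split())
print(vec.shape)  # (100,)

Because every token vector is composed from character n-grams, the model stays small and trains quickly, which is consistent with the efficiency claim in the abstract; the exact classifier layered on top of these embeddings is not shown here.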
Anthology ID:
2021.icon-main.32
Volume:
Proceedings of the 18th International Conference on Natural Language Processing (ICON)
Month:
December
Year:
2021
Address:
National Institute of Technology Silchar, Silchar, India
Editors:
Sivaji Bandyopadhyay, Sobha Lalitha Devi, Pushpak Bhattacharyya
Venue:
ICON
Publisher:
NLP Association of India (NLPAI)
Pages:
271–276
URL:
https://aclanthology.org/2021.icon-main.32
Cite (ACL):
Aditya Shah and Chandresh Maurya. 2021. How effective is incongruity? Implications for code-mixed sarcasm detection. In Proceedings of the 18th International Conference on Natural Language Processing (ICON), pages 271–276, National Institute of Technology Silchar, Silchar, India. NLP Association of India (NLPAI).
Cite (Informal):
How effective is incongruity? Implications for code-mixed sarcasm detection (Shah & Maurya, ICON 2021)
PDF:
https://aclanthology.org/2021.icon-main.32.pdf
Code:
likemycode/codemix