Alternative non-BERT model choices for the textual classification in low-resource languages and environments

Syed Mustavi Maheen, Moshiur Rahman Faisal, Md. Rafakat Rahman, Md. Shahriar Karim


Abstract
Natural Language Processing (NLP) tasks in non-dominant and low-resource languages have not experienced significant progress. Although pre-trained BERT models are available, GPU dependency, large memory requirements, and data scarcity often limit their applicability. As a solution, this paper proposes a fusion chain architecture composed of one or more layers of CNN, LSTM, and BiLSTM and identifies a precise configuration and chain length. The study shows that a simpler, CPU-trainable non-BERT fusion model, CNN + BiLSTM + CNN, is sufficient to surpass the textual classification performance of BERT-related models in resource-limited languages and environments. The fusion architecture competitively approaches state-of-the-art accuracy in several Bengali NLP tasks and in a six-class emotion detection task on a newly developed Bengali dataset. Interestingly, the performance of the identified fusion model, for instance CNN + BiLSTM + CNN, also holds for other low-resource languages and environments. An efficacy study shows that the CNN + BiLSTM + CNN model outperforms a BERT implementation for the Vietnamese language and performs almost equally well in English NLP tasks under artificial data scarcity. On the GLUE benchmark and other datasets such as Emotion, IMDB, and Intent classification, the CNN + BiLSTM + CNN model often surpasses or competes with BERT-base, TinyBERT, DistilBERT, and mBERT. Moreover, a position-sensitive self-attention layer further improves the fusion models’ performance in Bengali emotion classification. The models can also be compressed to roughly 5× smaller size through pruning and retraining, making them more viable for resource-constrained environments. Together, this study may help NLP practitioners and serve as a blueprint for NLP model choices in textual classification for low-resource languages and environments.
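For readers who want a concrete starting point, the sketch below illustrates the kind of CNN + BiLSTM + CNN fusion chain the abstract describes, written with Keras. The hyperparameters (vocabulary size, sequence length, embedding dimension, filter counts, kernel sizes, class count) are illustrative assumptions, not values taken from the paper.

from tensorflow.keras import layers, models

VOCAB_SIZE = 30000    # assumed vocabulary size
MAX_LEN = 128         # assumed maximum sequence length (tokens)
NUM_CLASSES = 6       # e.g. a six-class emotion detection task

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    # First CNN block: extracts local n-gram features
    layers.Conv1D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    # BiLSTM block: adds bidirectional context over the pooled features
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    # Second CNN block: higher-level features over the BiLSTM outputs
    layers.Conv1D(64, 3, padding="same", activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

A model of this size can be trained on a CPU in reasonable time, which is the practical advantage the paper argues for over BERT-scale models.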
Anthology ID:
2022.deeplo-1.20
Volume:
Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing
Month:
July
Year:
2022
Address:
Hybrid
Editors:
Colin Cherry, Angela Fan, George Foster, Gholamreza (Reza) Haffari, Shahram Khadivi, Nanyun (Violet) Peng, Xiang Ren, Ehsan Shareghi, Swabha Swayamdipta
Venue:
DeepLo
Publisher:
Association for Computational Linguistics
Pages:
192–202
URL:
https://aclanthology.org/2022.deeplo-1.20
DOI:
10.18653/v1/2022.deeplo-1.20
Cite (ACL):
Syed Mustavi Maheen, Moshiur Rahman Faisal, Md. Rafakat Rahman, and Md. Shahriar Karim. 2022. Alternative non-BERT model choices for the textual classification in low-resource languages and environments. In Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, pages 192–202, Hybrid. Association for Computational Linguistics.
Cite (Informal):
Alternative non-BERT model choices for the textual classification in low-resource languages and environments (Mustavi Maheen et al., DeepLo 2022)
PDF:
https://aclanthology.org/2022.deeplo-1.20.pdf
Video:
https://aclanthology.org/2022.deeplo-1.20.mp4
Data
Bengali Ekman's Six Basic Emotions Corpus, CARER, CoLA, GLUE, IMDb Movie Reviews, QNLI, SST, SST-2