The Document Vectors Using Cosine Similarity Revisited

Zhang Bingyu, Nikolay Arefyev


Abstract
The current state-of-the-art test accuracy (97.42%) on the IMDB movie reviews dataset was reported by Thongtan and Phienthrakul (2019) and achieved by a logistic regression classifier trained on a concatenation of the Document Vectors using Cosine Similarity (DV-ngrams-cosine), proposed in their paper, and Bag-of-N-grams (BON) vectors scaled by Naïve Bayesian weights. While large pre-trained Transformer-based models have shown SOTA results across many datasets and tasks, this model has not been surpassed by them, despite being much simpler and pre-trained on the IMDB dataset only. In this paper, we describe an error in the evaluation procedure of this model, which was found when we were trying to analyze its excellent performance on the IMDB dataset. We further show that the previously reported test accuracy of 97.42% is invalid and should be corrected to 93.68%. We also analyze the model performance with different amounts of training data (subsets of the IMDB dataset) and compare it to the Transformer-based RoBERTa model. The results show that while RoBERTa has a clear advantage for larger training sets, the DV-ngrams-cosine performs better than RoBERTa when the labeled training set is very small (10 or 20 documents). Finally, we introduce a sub-sampling scheme based on Naïve Bayesian weights for the training process of the DV-ngrams-cosine, which leads to faster training and better quality.
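The Naïve Bayesian weighting mentioned above can be illustrated with a minimal NumPy sketch, assuming the standard log-count-ratio formulation (in the style of Wang and Manning, 2012); the `nb_weights` helper and the toy data below are hypothetical, not the authors' code:

```python
import numpy as np

def nb_weights(X, y, alpha=1.0):
    """Naive Bayes log-count ratios r = log((p/||p||_1) / (q/||q||_1)),
    where p and q are the smoothed counts of each n-gram feature in the
    positive and negative documents, respectively.
    X: binary document-by-n-gram matrix; y: 0/1 labels."""
    p = alpha + X[y == 1].sum(axis=0)
    q = alpha + X[y == 0].sum(axis=0)
    return np.log((p / p.sum()) / (q / q.sum()))

# Toy binary bag-of-n-grams matrix: 4 documents, 3 n-gram features.
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [0, 1, 0]])
y = np.array([1, 1, 0, 0])

r = nb_weights(X, y)
X_scaled = X * r  # NB-weighted BON features, fed to a linear classifier
```

Features that occur mostly in positive documents receive positive weights and those that occur mostly in negative documents receive negative weights, so scaling the BON vectors by `r` emphasizes the most class-discriminative n-grams; the same magnitudes could also drive a sub-sampling probability during training.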
Anthology ID:
2022.insights-1.17
Volume:
Proceedings of the Third Workshop on Insights from Negative Results in NLP
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Shabnam Tafreshi, João Sedoc, Anna Rogers, Aleksandr Drozd, Anna Rumshisky, Arjun Akula
Venue:
insights
Publisher:
Association for Computational Linguistics
Pages:
129–133
URL:
https://aclanthology.org/2022.insights-1.17
DOI:
10.18653/v1/2022.insights-1.17
Cite (ACL):
Zhang Bingyu and Nikolay Arefyev. 2022. The Document Vectors Using Cosine Similarity Revisited. In Proceedings of the Third Workshop on Insights from Negative Results in NLP, pages 129–133, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
The Document Vectors Using Cosine Similarity Revisited (Bingyu & Arefyev, insights 2022)
PDF:
https://aclanthology.org/2022.insights-1.17.pdf
Video:
https://aclanthology.org/2022.insights-1.17.mp4
Code:
bgzh/dv_cosine_revisited
Data:
IMDb Movie Reviews