Abhinav Patil
2024
Filtered Corpus Training (FiCT) Shows that Language Models Can Generalize from Indirect Evidence
Abhinav Patil | Jaap Jumelet | Yu Ying Chiu | Andy Lapastora | Peter Shen | Lexie Wang | Clevis Willrich | Shane Steinert-Threlkeld
Transactions of the Association for Computational Linguistics, Volume 12
This paper introduces Filtered Corpus Training, a method that trains language models (LMs) on corpora with certain linguistic constructions filtered out from the training data, and uses it to measure the ability of LMs to perform linguistic generalization on the basis of indirect evidence. We apply the method to both LSTM and Transformer LMs (of roughly comparable size), developing filtered corpora that target a wide range of linguistic phenomena. Our results show that while transformers are better qua LMs (as measured by perplexity), both models perform equally and surprisingly well on linguistic generalization measures, suggesting that they are capable of generalizing from indirect evidence.
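As a rough illustration of the filtered corpus idea, the sketch below drops every sentence containing a target construction before training data is assembled. The regex-based passive-voice filter and the filter_corpus helper are hypothetical stand-ins, not the paper's actual filtering pipeline, which targets a wider range of phenomena with more careful detection.

import re

# Minimal sketch of filtered corpus training's data step: remove all sentences
# that contain the target construction, so the model only ever sees indirect
# evidence about it. The crude passive-voice regex is purely illustrative.
PASSIVE_PATTERN = re.compile(r"\b(?:was|were|is|are|been|being)\s+\w+ed\b", re.IGNORECASE)

def filter_corpus(sentences):
    """Keep only sentences that do NOT match the target construction."""
    return [s for s in sentences if not PASSIVE_PATTERN.search(s)]

corpus = [
    "The cat chased the mouse.",
    "The mouse was chased by the cat.",  # removed: matches the passive pattern
    "She writes clear code.",
]
print(filter_corpus(corpus))  # ['The cat chased the mouse.', 'She writes clear code.']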
2023
SADTech@DravidianLangTech: Multimodal Sentiment Analysis of Tamil and Malayalam
Abhinav Patil | Sam Briggs | Tara Wueger | Daniel D. O’Connell
Proceedings of the Third Workshop on Speech and Language Technologies for Dravidian Languages
We present several models for sentiment analysis of multimodal movie reviews in Tamil and Malayalam into 5 separate classes: highly negative, negative, neutral, positive, and highly positive, based on the shared task, “Multimodal Abusive Language Detection and Sentiment Analysis” at RANLP-2023. We use transformer language models to build text and audio embeddings and then compare the performance of multiple classifier models trained on these embeddings: a Multinomial Naive Bayes baseline, a Logistic Regression, a Random Forest, and an SVM. To account for class imbalance, we use both naive resampling and SMOTE. We found that without resampling, the baseline models have the same performance as a naive Majority Class Classifier. However, with resampling, logistic regression and random forest both demonstrate gains over the baseline.
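A minimal sketch of the resampling-plus-classifier setup described above, assuming scikit-learn and imbalanced-learn: SMOTE is applied to the training split only, and a logistic regression is fit on the resampled features. The random features and label distribution are illustrative placeholders, not the paper's transformer text and audio embeddings.

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Illustrative stand-in for the embedding features: an imbalanced 5-class problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
y = rng.choice(5, size=500, p=[0.05, 0.10, 0.55, 0.20, 0.10])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Oversample minority classes on the training data only, then fit the classifier.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)

print(classification_report(y_test, clf.predict(X_test)))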