2022
A Hybrid Approach to Cross-lingual Product Review Summarization
Saleh Soltan | Victor Soto | Ke Tran | Wael Hamza
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
We present a hybrid approach for product review summarization which consists of: (i) an unsupervised extractive step to extract the most important sentences out of all the reviews, and (ii) a supervised abstractive step to summarize the extracted sentences into a coherent short summary. This approach allows us to develop an efficient cross-lingual abstractive summarizer that can generate summaries in any language, given the extracted sentences out of thousands of reviews in a source language. In order to train and test the abstractive model, we create the Cross-lingual Amazon Reviews Summarization (CARS) dataset which provides English summaries for training, and English, French, Italian, Arabic, and Hindi summaries for testing based on selected English reviews. We show that the summaries generated by our model are as good as human written summaries in coherence, informativeness, non-redundancy, and fluency.
2021
Combining Weakly Supervised ML Techniques for Low-Resource NLU
Victor Soto | Konstantine Arkoudas
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers
Recent advances in transfer learning have improved the performance of virtual assistants considerably. Nevertheless, creating sophisticated voice-enabled applications for new domains remains a challenge, and meager training data is often a key bottleneck. Accordingly, unsupervised learning and SSL (semi-supervised learning) techniques continue to be of vital importance. While a number of such methods have been explored previously in isolation, in this paper we investigate the synergistic use of a number of weakly supervised techniques with a view to improving NLU (Natural Language Understanding) accuracy in low-resource settings. We explore three different approaches to incorporating anonymized, unlabeled, and automatically transcribed user utterances into the training process: two focused on data augmentation via SSL, and one focused on unsupervised and transfer learning. We show promising results, with each individual approach yielding a relative improvement in semantic error rate of 4.73% to 7.65%. Moreover, combining all three methods yields a relative improvement of 11.77% over our current baseline model. Our methods are applicable to any new domain with minimal training data, and can be deployed over time into a cycle of continual learning.
Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching
Thamar Solorio | Shuguang Chen | Alan W. Black | Mona Diab | Sunayana Sitaram | Victor Soto | Emre Yilmaz | Anirudh Srinivasan
2018
Collecting Code-Switched Data from Social Media
Gideon Mendels | Victor Soto | Aaron Jaech | Julia Hirschberg
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching
Gustavo Aguilar | Fahad AlGhamdi | Victor Soto | Thamar Solorio | Mona Diab | Julia Hirschberg
Joint Part-of-Speech and Language ID Tagging for Code-Switched Data
Victor Soto | Julia Hirschberg
Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching
Code-switching is the fluent alternation between two or more languages in conversation between bilinguals. Large populations of speakers code-switch during communication, but little effort has been made to develop tools for code-switching, including part-of-speech taggers. In this paper, we propose an approach to POS tagging of code-switched English-Spanish data based on recurrent neural networks. We test our model on known monolingual benchmarks to demonstrate that our neural POS tagging model is on par with state-of-the-art methods. We then test our methods on the Miami Bangor corpus of English-Spanish conversation, focusing on two types of experiments: POS tagging alone, for which we achieve 96.34% accuracy, and joint part-of-speech and language ID tagging, which achieves similar POS tagging accuracy (96.39%) and very high language ID accuracy (98.78%). Finally, we show that our proposed models outperform other state-of-the-art code-switched taggers.
Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task
Gustavo Aguilar | Fahad AlGhamdi | Victor Soto | Mona Diab | Julia Hirschberg | Thamar Solorio
Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching
In the third shared task of the Computational Approaches to Linguistic Code-Switching (CALCS) workshop, we focus on Named Entity Recognition (NER) on code-switched social-media data. We divide the shared task into two competitions based on the English-Spanish (ENG-SPA) and Modern Standard Arabic-Egyptian (MSA-EGY) language pairs. We use Twitter data and 9 entity types to establish a new dataset for code-switched NER benchmarks. In addition to the code-switching phenomenon itself, the diversity of the entities and the challenges of social media text make the task considerably difficult. As a result, the best scores of the competitions are 63.76% and 71.61% for ENG-SPA and MSA-EGY, respectively. We present the scores of 9 participants and discuss the most common challenges among submissions.
2016
Part of Speech Tagging for Code Switched Data
Fahad AlGhamdi | Giovanni Molina | Mona Diab | Thamar Solorio | Abdelati Hawwari | Victor Soto | Julia Hirschberg
Proceedings of the Second Workshop on Computational Approaches to Code Switching