Jan Christian Blaise Cruz
2025
Crowdsource, Crawl, or Generate? Creating SEA-VL, a Multicultural Vision-Language Dataset for Southeast Asia
Samuel Cahyawijaya | Holy Lovenia | Joel Ruben Antony Moniz | Tack Hwa Wong | Mohammad Rifqi Farhansyah | Thant Thiri Maung | Frederikus Hudi | David Anugraha | Muhammad Ravi Shulthan Habibi | Muhammad Reza Qorib | Amit Agarwal | Joseph Marvin Imperial | Hitesh Laxmichand Patel | Vicky Feliren | Bahrul Ilmi Nasution | Manuel Antonio Rufino | Genta Indra Winata | Rian Adam Rajagede | Carlos Rafael Catalan | Mohamed Fazli Mohamed Imam | Priyaranjan Pattnayak | Salsabila Zahirah Pranida | Kevin Pratama | Yeshil Bangera | Adisai Na-Thalang | Patricia Nicole Monderin | Yueqi Song | Christian Simon | Lynnette Hui Xian Ng | Richardy Lobo Sapan | Taki Hasan Rafi | Bin Wang | Supryadi | Kanyakorn Veerakanjana | Piyalitt Ittichaiwong | Matthew Theodore Roque | Karissa Vincentio | Takdanai Kreangphet | Phakphum Artkaew | Kadek Hendrawan Palgunadi | Yanzhi Yu | Rochana Prih Hastuti | William Nixon | Mithil Bangera | Adrian Xuan Wei Lim | Aye Hninn Khine | Hanif Muhammad Zhafran | Teddy Ferdinan | Audra Aurora Izzani | Ayushman Singh | Evan Evan | Jauza Akbar Krito | Michael Anugraha | Fenal Ashokbhai Ilasariya | Haochen Li | John Amadeo Daniswara | Filbert Aurelian Tjiaranata | Eryawan Presma Yulianrifat | Can Udomcharoenchaikit | Fadil Risdian Ansori | Mahardika Krisna Ihsani | Giang Nguyen | Anab Maulana Barik | Dan John Velasco | Rifo Ahmad Genadi | Saptarshi Saha | Chengwei Wei | Isaiah Edri W. Flores | Kenneth Chen Ko Han | Anjela Gail D. Santos | Wan Shen Lim | Kaung Si Phyo | Tim Santos | Meisyarah Dwiastuti | Jiayun Luo | Jan Christian Blaise Cruz | Ming Shan Hee | Ikhlasul Akmal Hanif | M.Alif Al Hakim | Muhammad Rizky Sya’ban | Kun Kerdthaisong | Lester James Validad Miranda | Fajri Koto | Tirana Noor Fatyanosa | Alham Fikri Aji | Jostin Jerico Rosal | Jun Kevin | Robert Wijaya | Onno P. Kampman | Ruochen Zhang | Börje F. Karlsson | Peerat Limkonchotiwat
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Despite Southeast Asia’s (SEA) extraordinary linguistic and cultural diversity, the region remains significantly underrepresented in vision-language (VL) research, resulting in AI models that inadequately capture SEA cultural nuances. To fill this gap, we present SEA-VL, an open-source initiative dedicated to developing high-quality, culturally relevant datasets for SEA languages. By involving contributors from SEA countries, SEA-VL ensures better cultural relevance and diversity, fostering greater inclusivity of underrepresented languages and cultural depictions in VL research. Our methodology employed three approaches: community-driven crowdsourcing with SEA contributors, automated image crawling, and synthetic image generation. We evaluated each method’s effectiveness in capturing cultural relevance. We found that image crawling achieves approximately 85% cultural relevance while being more cost- and time-efficient than crowdsourcing, whereas synthetic image generation failed to accurately reflect SEA cultural nuances and contexts. Collectively, we gathered 1.28 million culturally relevant SEA images, a collection more than 50 times larger than existing datasets. This work bridges the representation gap in SEA, establishes a foundation for developing culturally aware AI systems for this region, and provides a replicable framework for addressing representation gaps in other underrepresented regions.
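For readers curious how crawled images can be screened for cultural relevance at scale, below is a minimal sketch using CLIP zero-shot image-text similarity. The model name, concept prompt, and thresholding are illustrative assumptions; this is not the SEA-VL pipeline itself.

```python
# A minimal sketch of scoring crawled images for cultural relevance with
# CLIP zero-shot similarity. Illustrative only; not the SEA-VL pipeline.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def relevance_score(image: Image.Image, concept: str) -> float:
    """Return the CLIP image-text similarity logit for one image/concept."""
    inputs = processor(text=[concept], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        return model(**inputs).logits_per_image.item()

# Usage: keep images whose score for a SEA cultural concept clears a
# manually tuned threshold (concept text and threshold are assumptions).
# score = relevance_score(Image.open("candidate.jpg"), "a jeepney in Manila")
```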
FilBench: Can LLMs Understand and Generate Filipino?
Lester James Validad Miranda | Elyanah Aco | Conner G. Manuel | Jan Christian Blaise Cruz | Joseph Marvin Imperial
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Despite the impressive performance of LLMs on English-based tasks, little is known about their capabilities in specific languages such as Filipino. In this work, we address this gap by introducing FilBench, a Filipino-centric benchmark designed to evaluate LLMs across a diverse set of tasks and capabilities in Filipino, Tagalog, and Cebuano. We carefully curate the tasks in FilBench to reflect the priorities and trends of NLP research in the Philippines, such as Cultural Knowledge, Classical NLP, Reading Comprehension, and Generation. By evaluating 27 state-of-the-art LLMs on FilBench, we find that several LLMs struggle with reading comprehension and translation. Our results indicate that FilBench is challenging, with the best model, GPT-4o, achieving a score of only 72.23%. Moreover, we find that models trained specifically for Southeast Asian languages tend to underperform on FilBench, with the highest-performing such model, SEA-LION v3 70B, achieving a score of only 61.07%. Our work demonstrates the value of curating language-specific LLM benchmarks to drive progress on Filipino NLP and increase the inclusion of Philippine languages in LLM development.
Thank You, Stingray: Multilingual Large Language Models Can Not (Yet) Disambiguate Cross-Lingual Word Senses
Samuel Cahyawijaya | Ruochen Zhang | Jan Christian Blaise Cruz | Holy Lovenia | Elisa Gilbert | Hiroki Nomoto | Alham Fikri Aji
Findings of the Association for Computational Linguistics: NAACL 2025
Multilingual large language models (LLMs) have gained prominence, but concerns arise regarding their reliability beyond English. This study addresses the gap in cross-lingual semantic evaluation by introducing StingrayBench, a novel benchmark for cross-lingual sense disambiguation. In this paper, we demonstrate the use of false friends, words that are orthographically similar but have completely different meanings in two languages, as an approach to pinpoint the limitations of cross-lingual sense disambiguation in LLMs. We collect false friends in four language pairs, namely Indonesian-Malay, Indonesian-Tagalog, Chinese-Japanese, and English-German, and challenge LLMs to distinguish their use in context. In our analysis of various models, we observe that they tend to be biased toward higher-resource languages. We also propose new metrics for quantifying cross-lingual sense bias and comprehension based on our benchmark. Our work contributes to developing more diverse and inclusive language modeling, promoting fairer access for the wider multilingual community.
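To make the false-friend setup concrete, here is a hedged sketch of how such a probe might be constructed. The word "budak" (commonly cited as meaning "slave" in Indonesian but "child" in Malay) serves as the example; the entry structure and prompt wording are illustrative, not StingrayBench's exact format.

```python
# A hedged sketch of a false-friend disambiguation probe. The entry and
# prompt wording are illustrative, not StingrayBench's exact format.
FALSE_FRIEND = {
    "word": "budak",
    "indonesian_sense": "slave",  # sense in Indonesian
    "malay_sense": "child",       # sense in Malay
}

def make_probe(entry: dict) -> str:
    """Build a question that forces a model to pick the in-context sense."""
    return (
        f"The word '{entry['word']}' exists in both Indonesian and Malay. "
        f"In the Malay sentence 'Budak itu bermain di taman', does "
        f"'{entry['word']}' mean '{entry['indonesian_sense']}' or "
        f"'{entry['malay_sense']}'?"
    )

print(make_probe(FALSE_FRIEND))
```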
CaMMT: Benchmarking Culturally Aware Multimodal Machine Translation
Emilio Villa-Cueva | Sholpan Bolatzhanova | Diana Turmakhan | Kareem Elzeky | Henok Biadglign Ademtew | Alham Fikri Aji | Vladimir Araujo | Israel Abebe Azime | Jinheon Baek | Frederico Belcavello | Fermin Cristobal | Jan Christian Blaise Cruz | Mary Dabre | Raj Dabre | Toqeer Ehsan | Naome A Etori | Fauzan Farooqui | Jiahui Geng | Guido Ivetta | Thanmay Jayakumar | Soyeong Jeong | Zheng Wei Lim | Aishik Mandal | Sofía Martinelli | Mihail Minkov Mihaylov | Daniil Orel | Aniket Pramanick | Sukannya Purkayastha | Israfel Salazar | Haiyue Song | Tiago Timponi Torrent | Debela Desalegn Yadeta | Injy Hamed | Atnafu Lambebo Tonja | Thamar Solorio
Findings of the Association for Computational Linguistics: EMNLP 2025
Translating cultural content poses challenges for machine translation systems due to differences in conceptualizations between cultures, where language alone may fail to convey sufficient context to capture region-specific meanings. In this work, we investigate whether images can act as cultural context in multimodal translation. We introduce CaMMT, a human-curated benchmark of over 5,800 triples, each pairing an image with parallel captions in English and a regional language. Using this dataset, we evaluate five Vision Language Models (VLMs) in text-only and text+image settings. Through automatic and human evaluations, we find that visual context generally improves translation quality, especially in handling Culturally-Specific Items (CSIs), disambiguation, and correct gender marking. By releasing CaMMT, we aim to support broader efforts to build and evaluate multimodal translation systems that are better aligned with cultural nuance and regional variation.
MoMentS: A Comprehensive Multimodal Benchmark for Theory of Mind
Emilio Villa-Cueva | S M Masrur Ahmed | Rendi Chevi | Jan Christian Blaise Cruz | Kareem Elzeky | Fermin Cristobal | Alham Fikri Aji | Skyler Wang | Rada Mihalcea | Thamar Solorio
Findings of the Association for Computational Linguistics: EMNLP 2025
Understanding Theory of Mind (ToM) is essential for building socially intelligent multimodal agents capable of perceiving and interpreting human behavior. We introduce MoMentS (Multimodal Mental States), a comprehensive benchmark designed to assess the ToM capabilities of multimodal large language models (MLLMs) through realistic, narrative-rich scenarios presented in short films. MoMentS includes over 2,300 multiple-choice questions spanning seven distinct ToM categories. The benchmark features long video context windows and realistic social interactions that provide deeper insight into characters’ mental states. We evaluate several MLLMs and find that although vision generally improves performance, models still struggle to integrate it effectively. For audio, models that process dialogues as audio do not consistently outperform transcript-based inputs. Our findings highlight the need to improve multimodal integration and point to open challenges that must be addressed to advance AI’s social understanding.
Extracting General-use Transformers for Low-resource Languages via Knowledge Distillation
Jan Christian Blaise Cruz
Proceedings of the First Workshop on Language Models for Low-Resource Languages
In this paper, we propose the use of simple knowledge distillation to produce smaller and more efficient single-language transformers from Massively Multilingual Transformers (MMTs), alleviating the tradeoffs associated with using such models in low-resource settings. Using Tagalog as a case study, we show that these smaller single-language models perform on par with strong baselines on a variety of benchmark tasks in a much more efficient manner. Furthermore, we investigate additional steps during the distillation process that improve the soft supervision of the target language, and provide a number of analyses and ablations to show the efficacy of the proposed method.
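The soft supervision at the core of this approach is standard knowledge distillation: the student matches the teacher's temperature-softened output distribution. A minimal sketch of that loss follows; the temperature value and any mixing with a hard-label loss are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student.

    Scaling by T^2 keeps gradient magnitudes comparable across
    temperatures, per the standard distillation formulation.
    """
    t = temperature
    soft_targets = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * (t ** 2)
```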
WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines
Genta Indra Winata | Frederikus Hudi | Patrick Amadeus Irawan | David Anugraha | Rifki Afina Putri | Wang Yutong | Adam Nohejl | Ubaidillah Ariq Prathama | Nedjma Ousidhoum | Afifa Amriani | Anar Rzayev | Anirban Das | Ashmari Pramodya | Aulia Adila | Bryan Wilie | Candy Olivia Mawalim | Cheng Ching Lam | Daud Abolade | Emmanuele Chersoni | Enrico Santus | Fariz Ikhwantri | Garry Kuwanto | Hanyang Zhao | Haryo Akbarianto Wibowo | Holy Lovenia | Jan Christian Blaise Cruz | Jan Wira Gotama Putra | Junho Myung | Lucky Susanto | Maria Angelica Riera Machin | Marina Zhukova | Michael Anugraha | Muhammad Farid Adilazuarda | Natasha Christabelle Santosa | Peerat Limkonchotiwat | Raj Dabre | Rio Alexander Audino | Samuel Cahyawijaya | Shi-Xiong Zhang | Stephanie Yulia Salim | Yi Zhou | Yinxuan Gui | David Ifeoluwa Adelani | En-Shiun Annie Lee | Shogo Okada | Ayu Purwarianti | Alham Fikri Aji | Taro Watanabe | Derry Tanti Wijaya | Alice Oh | Chong-Wah Ngo
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Vision Language Models (VLMs) often struggle with culture-specific knowledge, particularly in languages other than English and in underrepresented cultural contexts. To evaluate their understanding of such knowledge, we introduce WorldCuisines, a massive-scale benchmark for multilingual, multicultural, visually grounded language understanding. The benchmark includes a visual question answering (VQA) dataset with text-image pairs across 30 languages and dialects, spanning 9 language families and featuring over 1 million data points, making it the largest multicultural VQA benchmark to date. It includes tasks for identifying dish names and their origins. We provide evaluation datasets in two sizes (12k and 60k instances) alongside a training dataset (1 million instances). Our findings show that while VLMs perform better with correct location context, they struggle with adversarial contexts and with predicting specific regional cuisines and languages. To support future research, we release a knowledge base with annotated food entries and images along with the VQA data.
2024
SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages
Holy Lovenia | Rahmad Mahendra | Salsabil Maulana Akbar | Lester James V. Miranda | Jennifer Santoso | Elyanah Aco | Akhdan Fadhilah | Jonibek Mansurov | Joseph Marvin Imperial | Onno P. Kampman | Joel Ruben Antony Moniz | Muhammad Ravi Shulthan Habibi | Frederikus Hudi | Railey Montalan | Ryan Ignatius | Joanito Agili Lopo | William Nixon | Börje F. Karlsson | James Jaya | Ryandito Diandaru | Yuze Gao | Patrick Amadeus | Bin Wang | Jan Christian Blaise Cruz | Chenxi Whitehouse | Ivan Halim Parmonangan | Maria Khelli | Wenyu Zhang | Lucky Susanto | Reynard Adha Ryanda | Sonny Lazuardi Hermawan | Dan John Velasco | Muhammad Dehan Al Kautsar | Willy Fitra Hendria | Yasmin Moslem | Noah Flynn | Muhammad Farid Adilazuarda | Haochen Li | Johanes Lee | R. Damanhuri | Shuo Sun | Muhammad Reza Qorib | Amirbek Djanibekov | Wei Qi Leong | Quyet V. Do | Niklas Muennighoff | Tanrada Pansuwan | Ilham Firdausi Putra | Yan Xu | Tai Ngee Chia | Ayu Purwarianti | Sebastian Ruder | William Tjhi | Peerat Limkonchotiwat | Alham Fikri Aji | Sedrick Keh | Genta Indra Winata | Ruochen Zhang | Fajri Koto | Zheng-Xin Yong | Samuel Cahyawijaya
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Southeast Asia (SEA) is a region rich in linguistic diversity and cultural variety, with over 1,300 indigenous languages and a population of 671 million people. However, prevailing AI models suffer from a significant lack of representation of texts, images, and audio datasets from SEA, compromising the quality of AI models for SEA languages. Evaluating models for SEA languages is challenging due to the scarcity of high-quality datasets, compounded by the dominance of English training data, raising concerns about potential cultural misrepresentation. To address these challenges, through a collaborative movement, we introduce SEACrowd, a comprehensive resource center that fills the resource gap by providing standardized corpora in nearly 1,000 SEA languages across three modalities. Through our SEACrowd benchmarks, we assess the quality of AI models on 36 indigenous languages across 13 tasks, offering valuable insights into the current AI landscape in SEA. Furthermore, we propose strategies to facilitate greater AI advancements, maximizing potential utility and resource equity for the future of AI in Southeast Asia.
Samsung R&D Institute Philippines @ WMT 2024 Indic MT Task
Matthew Theodore Roque | Carlos Rafael Catalan | Dan John Velasco | Manuel Antonio Rufino | Jan Christian Blaise Cruz
Proceedings of the Ninth Conference on Machine Translation
This paper presents the methodology developed by the Samsung R&D Institute Philippines (SRPH) Language Intelligence Team (LIT) for the WMT 2024 Shared Task on Low-Resource Indic Language Translation. We trained standard sequence-to-sequence Transformer models from scratch for both English-to-Indic and Indic-to-English translation directions. Additionally, we explored data augmentation through backtranslation and the application of noisy channel reranking to improve translation quality. A multilingual model trained across all language pairs was also investigated. Our results demonstrate the effectiveness of the multilingual model, with significant performance improvements observed in most language pairs, highlighting the potential of shared language representations in low-resource translation scenarios.
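The backtranslation step mentioned above follows the usual pattern: translate target-side monolingual text with a reverse model and add the synthetic pairs as extra source-side training data. A minimal sketch, with `reverse_model` as an assumed callable rather than a specific API from the paper:

```python
# A minimal sketch of backtranslation-based data augmentation.
# `reverse_model` is an assumed target->source translation callable.
def backtranslate(monolingual_target: list[str],
                  reverse_model,
                  parallel_pairs: list[tuple[str, str]]):
    """Augment parallel data with synthetic (source, target) pairs."""
    synthetic = [(reverse_model(tgt), tgt) for tgt in monolingual_target]
    # Synthetic text sits on the source side only, so translation noise
    # never corrupts the clean target-side references.
    return parallel_pairs + synthetic
```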
Samsung R&D Institute Philippines @ WMT 2024 Low-resource Languages of Spain Shared Task
Dan John Velasco | Manuel Antonio Rufino | Jan Christian Blaise Cruz
Proceedings of the Ninth Conference on Machine Translation
This paper details the submission of Samsung R&D Institute Philippines (SRPH) Language Intelligence Team (LIT) to the WMT 2024 Low-resource Languages of Spain shared task. We trained translation models for Spanish to Aragonese, Spanish to Aranese/Occitan, and Spanish to Asturian using a standard sequence-to-sequence Transformer architecture, augmenting it with a noisy-channel reranking strategy to select better outputs during decoding. For Spanish to Asturian translation, our method reaches comparable BLEU scores to a strong commercial baseline translation system using only constrained data, backtranslations, noisy channel reranking, and a shared vocabulary spanning all four languages.
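Noisy-channel reranking, used in both WMT 2024 submissions above, rescores an n-best list by combining the direct translation model with a reverse (channel) model and a target-side language model. A sketch under common assumptions follows; the interpolation weights and length normalization are illustrative, not the teams' exact setup.

```python
# A sketch of noisy-channel reranking over an n-best list. The scoring
# callables and weights are assumptions, not the submissions' exact setup.
def noisy_channel_rerank(source: str, hypotheses: list[str],
                         log_p_direct,   # log p(hyp | src), forward model
                         log_p_channel,  # log p(src | hyp), reverse model
                         log_p_prior,    # log p(hyp), target-side LM
                         lam_channel: float = 1.0,
                         lam_prior: float = 0.3) -> str:
    def score(hyp: str) -> float:
        # Length-normalize so the reranker does not favor short outputs.
        n = max(len(hyp.split()), 1)
        return (log_p_direct(source, hyp)
                + lam_channel * log_p_channel(hyp, source)
                + lam_prior * log_p_prior(hyp)) / n
    return max(hypotheses, key=score)
```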
2023
Prompting Multilingual Large Language Models to Generate Code-Mixed Texts: The Case of South East Asian Languages
Zheng-Xin Yong | Ruochen Zhang | Jessica Zosa Forde | Skyler Wang | Arjun Subramonian | Holy Lovenia | Samuel Cahyawijaya | Genta Indra Winata | Lintang Sutawika | Jan Christian Blaise Cruz | Yin Lin Tan | Long Phan | Rowena Garcia | Thamar Solorio | Alham Fikri Aji
Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-Switching
While code-mixing is a common linguistic practice in many parts of the world, collecting high-quality and low-cost code-mixed data remains a challenge for natural language processing (NLP) research. The recent proliferation of Large Language Models (LLMs) compels one to ask: how capable are these systems of generating code-mixed data? In this paper, we explore prompting multilingual LLMs in a zero-shot manner to generate code-mixed data for seven languages in South East Asia (SEA), namely Indonesian, Malay, Chinese, Tagalog, Vietnamese, Tamil, and Singlish. We find that publicly available multilingual instruction-tuned models such as BLOOMZ and Flan-T5-XXL are incapable of producing texts with phrases or clauses from different languages. ChatGPT exhibits inconsistent capabilities in generating code-mixed texts, wherein its performance varies depending on the prompt template and language pairing. For instance, ChatGPT generates fluent and natural Singlish texts (an English-based creole spoken in Singapore), but for the English-Tamil language pair, the system mostly produces grammatically incorrect or semantically meaningless utterances. Furthermore, it may erroneously introduce languages not specified in the prompt. Based on our investigation, existing multilingual LLMs exhibit a wide range of proficiency in code-mixed data generation for SEA languages. As such, we advise against using LLMs in this context without extensive human checks.
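As an illustration of the zero-shot setup, a prompt-construction sketch follows; the template wording is an assumption, not the paper's exact prompt.

```python
# A hedged illustration of zero-shot prompting for code-mixed generation.
# The template wording is an assumption, not the paper's exact prompt.
TEMPLATE = (
    "Write a casual sentence that code-mixes {lang_a} and {lang_b}, "
    "switching between the two languages within the sentence."
)

def build_prompts(language_pairs: list[tuple[str, str]]) -> list[str]:
    return [TEMPLATE.format(lang_a=a, lang_b=b) for a, b in language_pairs]

prompts = build_prompts([("Indonesian", "English"), ("Tagalog", "English")])
```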
Multilingual Large Language Models Are Not (Yet) Code-Switchers
Ruochen Zhang | Samuel Cahyawijaya | Jan Christian Blaise Cruz | Genta Winata | Alham Fikri Aji
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Multilingual Large Language Models (LLMs) have recently shown great capabilities in a wide range of tasks, exhibiting state-of-the-art performance through zero-shot or few-shot prompting methods. While there have been extensive studies on their abilities in monolingual tasks, the investigation of their potential in the context of code-switching (CSW), the practice of alternating languages within an utterance, remains relatively uncharted. In this paper, we provide a comprehensive empirical analysis of various multilingual LLMs, benchmarking their performance across four tasks: sentiment analysis, machine translation, summarization, and word-level language identification. Our results indicate that despite multilingual LLMs exhibiting promising outcomes in certain tasks using zero- or few-shot prompting, they still underperform in comparison to fine-tuned models of much smaller scales. We argue that current “multilingualism” in LLMs does not inherently imply proficiency with code-switching texts, calling for future research to bridge this discrepancy.
Current Status of NLP in South East Asia with Insights from Multilingualism and Language Diversity
Alham Fikri Aji | Jessica Zosa Forde | Alyssa Marie Loo | Lintang Sutawika | Skyler Wang | Genta Indra Winata | Zheng-Xin Yong | Ruochen Zhang | A. Seza Doğruöz | Yin Lin Tan | Jan Christian Blaise Cruz
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: Tutorial Abstract
Towards Automatic Construction of Filipino WordNet: Word Sense Induction and Synset Induction Using Sentence Embeddings
Dan John Velasco | Axel Alba | Trisha Gail Pelagio | Bryce Anthony Ramirez | Jan Christian Blaise Cruz | Unisse Chua | Briane Paul Samson | Charibeth Cheng
Proceedings of the First Workshop in South East Asian Language Processing
Samsung R&D Institute Philippines at WMT 2023
Jan Christian Blaise Cruz
Proceedings of the Eighth Conference on Machine Translation
In this paper, we describe the constrained submission systems of Samsung R&D Institute Philippines to the WMT 2023 General Translation Task for two directions: en->he and he->en. Our systems comprise Transformer-based sequence-to-sequence models trained with a mix of best practices: comprehensive data preprocessing pipelines, synthetic backtranslated data, and the use of noisy channel reranking during online decoding. On two public benchmarks, FLORES-200 and NTREX-128, our models perform comparably to, and sometimes outperform, strong unconstrained baseline systems such as mBART50 M2M and NLLB 200 MoE despite having significantly fewer parameters.
2022
Improving Large-scale Language Models and Resources for Filipino
Jan Christian Blaise Cruz | Charibeth Cheng
Proceedings of the Thirteenth Language Resources and Evaluation Conference
In this paper, we improve on existing language resources for the low-resource Filipino language in two ways. First, we outline the construction of the TLUnified dataset, a large-scale pretraining corpus that improves over smaller existing pretraining datasets for the language in terms of scale and topic variety. Second, we pretrain new Transformer language models following the RoBERTa pretraining technique to supplant existing models trained with small corpora. Our new RoBERTa models show significant improvements over existing Filipino models on three benchmark datasets, with an average gain of 4.47% test accuracy across three classification tasks of varying difficulty.
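The RoBERTa pretraining technique referenced here centers on dynamic masked language modeling. A minimal sketch of that setup with the HuggingFace transformers library follows; the tokenizer (English roberta-base as a stand-in) and hyperparameters are common defaults, not the paper's exact Filipino configuration.

```python
# A minimal sketch of RoBERTa-style MLM pretraining setup. The tokenizer
# (English roberta-base as a stand-in) and defaults are assumptions, not
# the paper's exact Filipino configuration.
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          RobertaConfig, RobertaForMaskedLM)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM(RobertaConfig(vocab_size=tokenizer.vocab_size))

# RoBERTa's dynamic masking: 15% of tokens are masked per batch, with the
# mask re-sampled by the collator at training time rather than fixed once
# during preprocessing.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
```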
Samsung Research Philippines - Datasaur AI’s Submission for the WMT22 Large Scale Multilingual Translation Task
Jan Christian Blaise Cruz | Lintang Sutawika
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes the submission of the joint Samsung Research Philippines - Datasaur AI team for the WMT22 Large Scale Multilingual African Translation shared task. We approach the contest as a way to explore task composition as a solution for low-resource multilingual translation, using adapter fusion to combine multiple task adapters that each learn a subset of the total translation pairs. Our final model shows performance improvements in 32 out of the 44 translation directions we participate in, compared to a single-model system trained on multiple directions at once.
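Adapter fusion, as used above, learns an attention mechanism over the outputs of several independently trained task adapters so that knowledge from each can be composed. A simplified PyTorch sketch of the fusion layer follows; value projections and other details of the full AdapterFusion formulation are omitted.

```python
import torch
import torch.nn as nn

class AdapterFusion(nn.Module):
    """Simplified attention over frozen task-adapter outputs.

    The layer's hidden state acts as the query; each adapter's output
    acts as a key (and, in this simplification, also the value).
    """
    def __init__(self, hidden_size: int):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden: torch.Tensor,
                adapter_outputs: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, d); adapter_outputs: (batch, seq, n_adapters, d)
        q = self.query(hidden).unsqueeze(2)            # (b, s, 1, d)
        k = self.key(adapter_outputs)                  # (b, s, n, d)
        attn = torch.softmax((q * k).sum(-1), dim=-1)  # (b, s, n)
        return (attn.unsqueeze(-1) * adapter_outputs).sum(dim=2)
```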
2021
Data Processing Matters: SRPH-Konvergen AI’s Machine Translation System for WMT’21
Lintang Sutawika | Jan Christian Blaise Cruz
Proceedings of the Sixth Conference on Machine Translation
In this paper, we describe the submission of the joint Samsung Research Philippines-Konvergen AI team for the WMT’21 Large Scale Multilingual Translation Task - Small Track 2. We submit a standard Seq2Seq Transformer model to the shared task without any training or architecture tricks, relying mainly on the strength of our data preprocessing techniques to boost performance. Our final submission model scored 22.92 average BLEU on the FLORES-101 devtest set, and 22.97 average BLEU on the contest’s hidden test set, ranking us sixth overall. Despite using only a standard Transformer, our model ranked first in Indonesian to Javanese, showing that data preprocessing matters as much as, if not more than, cutting-edge model architectures and training techniques.
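Since the submission credits its performance mainly to data preprocessing, here is a sketch of the kind of high-impact filtering heuristics commonly applied to parallel corpora; the specific thresholds are assumptions, not the team's actual pipeline.

```python
import re

def clean_parallel_corpus(pairs, max_ratio: float = 2.5, max_len: int = 250):
    """Filter a parallel corpus with common heuristics: deduplication,
    length limits, length-ratio limits, and removal of letterless lines.
    Thresholds here are illustrative assumptions."""
    seen, kept = set(), []
    for src, tgt in pairs:
        src, tgt = src.strip(), tgt.strip()
        if not src or not tgt or (src, tgt) in seen:
            continue  # drop empty lines and exact duplicates
        n_src, n_tgt = len(src.split()), len(tgt.split())
        if max(n_src, n_tgt) > max_len:
            continue  # drop overlong sentences
        if max(n_src, n_tgt) / max(min(n_src, n_tgt), 1) > max_ratio:
            continue  # drop likely misalignments
        if re.fullmatch(r"[\W\d_]+", src) or re.fullmatch(r"[\W\d_]+", tgt):
            continue  # drop lines containing no letters at all
        seen.add((src, tgt))
        kept.append((src, tgt))
    return kept
```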
2020
Localization of Fake News Detection via Multitask Transfer Learning
Jan Christian Blaise Cruz | Julianne Agatha Tan | Charibeth Cheng
Proceedings of the Twelfth Language Resources and Evaluation Conference
The use of the internet as a fast medium for spreading fake news reinforces the need for computational tools that combat it. Techniques that train fake news classifiers exist, but they all assume an abundance of resources, including large labeled datasets and expert-curated corpora, which low-resource languages may not have. In this work, we make two main contributions: First, we alleviate resource scarcity by constructing the first expert-curated benchmark dataset for fake news detection in Filipino, which we call “Fake News Filipino.” Second, we benchmark Transfer Learning (TL) techniques and show that they can be used to train robust fake news classifiers from little data, achieving 91% accuracy on our fake news dataset and reducing the error by 14% compared to established few-shot baselines. Furthermore, lifting ideas from multitask learning, we show that augmenting transformer-based transfer techniques with auxiliary language modeling losses improves their performance by adapting to writing style. Using this, we improve TL performance by 4-6%, achieving an accuracy of 96% with our best model. Lastly, we show that our method generalizes well to different types of news articles, including political news, entertainment news, and opinion articles.
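The auxiliary language modeling idea amounts to adding a weighted LM term to the classification objective during fine-tuning so the model also adapts to the corpus's writing style. A sketch with a hypothetical model interface follows; the loss methods and the 0.1 weight are assumptions, not the paper's exact values.

```python
# A sketch of mixing a classification loss with an auxiliary LM loss.
# `model.classification_loss` and `model.language_modeling_loss` are a
# hypothetical interface; the 0.1 weight is an illustrative assumption.
def multitask_step(model, optimizer, texts, labels,
                   lm_weight: float = 0.1) -> float:
    cls_loss = model.classification_loss(texts, labels)
    lm_loss = model.language_modeling_loss(texts)  # style-adaptation signal
    loss = cls_loss + lm_weight * lm_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```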