Tawunrat Chalothorn


2025

FinMind-Y-Me at the Regulations Challenge Task: Financial Mind Your Meaning based on THaLLE
Pantid Chantangphol | Pornchanan Balee | Kantapong Sucharitpongpan | Chanatip Saetia | Tawunrat Chalothorn
Proceedings of the Joint Workshop of the 9th Financial Technology and Natural Language Processing (FinNLP), the 6th Financial Narrative Processing (FNP), and the 1st Workshop on Large Language Models for Finance and Legal (LLMFinLegal)

This paper presents our submission to the COLING 2025 Regulations Challenge, focusing on nine tasks in the regulatory and financial domains. The challenge aims to advance large language models beyond general-purpose capabilities, adapting them to regulatory and financial tasks through a unified framework of task-specific prompts and input templates. We propose a sequential fine-tuning approach that integrates reasoning-based training, tailored system prompts, and Chain-of-Thought (CoT) inference to optimize task-specific performance. This method improves accuracy and reliability across diverse tasks. Notably, CoT inference is especially effective for complex scenarios and for tasks that require specific answer patterns, such as named entity recognition and financial calculations. Our model achieved an overall score of 54.801%, ranking 1st among all participating teams in the challenge. These results highlight the effectiveness of sequential fine-tuning, advanced reasoning techniques, and fine-tuned prompts in improving performance and scalability for complex regulatory and financial applications.

2024

MrRank: Improving Question Answering Retrieval System through Multi-Result Ranking Model
Danupat Khamnuansin | Tawunrat Chalothorn | Ekapol Chuangsuwanich
Findings of the Association for Computational Linguistics: ACL 2024

Large Language Models (LLMs) often struggle with hallucinations and outdated information. To address this, Information Retrieval (IR) systems can be employed to augment LLMs with up-to-date knowledge. However, existing IR techniques have shortcomings that create a performance bottleneck. Given the extensive array of available IR systems, combining diverse approaches is a viable strategy, yet prior attempts have yielded only limited success. In this work, we propose an approach that leverages learning-to-rank techniques to combine heterogeneous IR systems. We demonstrate the method on two Retrieval Question Answering (ReQA) tasks. Our empirical findings show a significant performance improvement, outperforming previous approaches and achieving state-of-the-art results on ReQA SQuAD.

Financial Product Ontology Population with Large Language Models
Chanatip Saetia | Jiratha Phruetthiset | Tawunrat Chalothorn | Monchai Lertsutthiwong | Supawat Taerungruang | Pakpoom Buabthong
Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing

Ontology population, which aims to extract structured data from unstructured text to enrich domain-specific ontologies, typically faces challenges of data scarcity and linguistic complexity, particularly in specialized fields such as retail banking. In this study, we investigate the application of large language models (LLMs) to populate domain-specific ontologies of retail banking products from Thai corporate documents. We compare traditional span-based approaches with LLM-based generative methods under different prompting techniques. Our findings reveal that while span-based methods struggle with data scarcity and complex linguistic structure, LLM-based generative approaches substantially outperform them, achieving a 61.05% F1 score, with the largest improvement coming from providing examples in the prompts. This highlights the potential of LLMs for ontology population tasks, offering a scalable and efficient solution for structured information extraction, especially in low-resource language settings.

2023

Enhancing Word Discrimination and Matching in Query-by-Example Spoken term detection with Acoustic Word Embeddings
Pantid Chantangphol | Theerat Sakdejayont | Tawunrat Chalothorn
Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023)

2020

Combining Thai EDUs: Principle and Implementation
Chanatip Saetia | Supawat Taerungruang | Tawunrat Chalothorn
Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation

2014

TJP: Identifying the Polarity of Tweets from Contexts
Tawunrat Chalothorn | Jeremy Ellman
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

2013

TJP: Using Twitter to Analyze the Polarity of Contexts
Tawunrat Chalothorn | Jeremy Ellman
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)