Minho Park
2024
Forecasting Future International Events: A Reliable Dataset for Text-Based Event Modeling
Daehoon Gwak | Junwoo Park | Minho Park | ChaeHun Park | Hyunchan Lee | Edward Choi | Jaegul Choo
Findings of the Association for Computational Linguistics: EMNLP 2024
Predicting future international events from textual information, such as news articles, has tremendous potential for applications in global policy, strategic decision-making, and geopolitics. However, existing datasets available for this task are often limited in quality, hindering the progress of related research. In this paper, we introduce a novel dataset designed to address these limitations by leveraging the advanced reasoning capabilities of large language models (LLMs). Our dataset features high-quality scoring labels generated through advanced prompt modeling and rigorously validated by domain experts in political science. We showcase the quality and utility of our dataset for real-world event prediction tasks, demonstrating its effectiveness through extensive experiments and analysis. Furthermore, we publicly release our dataset along with the full automation source code for data collection, labeling, and benchmarking, aiming to support and advance research in text-based event prediction.
2022
Learning to Embed Multi-Modal Contexts for Situated Conversational Agents
Haeju Lee | Oh Joon Kwon | Yunseon Choi | Minho Park | Ran Han | Yoonhyung Kim | Jinhyeon Kim | Youngjune Lee | Haebin Shin | Kangwook Lee | Kee-Eung Kim
Findings of the Association for Computational Linguistics: NAACL 2022
The Situated Interactive Multi-Modal Conversations (SIMMC) 2.0 challenge aims to create virtual shopping assistants that can accept complex multi-modal inputs, i.e., visual appearances of objects and user utterances. It consists of four subtasks: multi-modal disambiguation (MM-Disamb), multi-modal coreference resolution (MM-Coref), multi-modal dialog state tracking (MM-DST), and response retrieval and generation. While many task-oriented dialog systems tackle each subtask separately, we propose a jointly learned multi-modal encoder-decoder that incorporates visual inputs and performs all four subtasks at once for efficiency. Using a single unified model, this approach won the MM-Coref and response retrieval subtasks and was runner-up for the remaining subtasks at the 10th Dialog Systems Technology Challenge (DSTC10), setting a high bar for the novel task of multi-modal task-oriented dialog systems.