A Two-stage Prompt-Based Strategy for CRMUS Track 1

Chen Mosha


Abstract
Large Language Models (LLMs) have sparked a new trend in Natural Language Processing, and an increasing number of researchers have recognized their potential to unify diverse NLP tasks under a text-generation paradigm. To explore the potential of LLMs in the children's-stories domain, CCL2024 released the Commonsense Reasoning and Moral Understanding in Children's Stories (CRMUS) task. This paper presents a straightforward yet effective two-stage prompt-based strategy for CRMUS Track 1. In the first stage, we use the same prompt to obtain responses from GPT-4, ERNIE-4, and Qwen-Max. In the second stage, we apply a voting mechanism to the first-stage results; for records with inconsistent outcomes, we query GPT-4 for secondary confirmation to determine the final answer. Experimental results show that our method achieved an average score of 79.27, securing first place in the closed domain among ten participating teams and demonstrating the effectiveness of our approach.
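The two-stage strategy described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the unanimity threshold for "consistent" outcomes and the `confirm` callback (standing in for the secondary GPT-4 query) are assumptions, since the abstract does not specify how disagreement is defined or how the confirmation prompt is phrased.

```python
from collections import Counter

def two_stage_vote(answers, confirm):
    """Stage 2 of the strategy: vote over the per-model answers from
    stage 1 (GPT-4, ERNIE-4, Qwen-Max); on disagreement, fall back to
    a secondary confirmation query (the paper uses GPT-4 for this).

    `answers`  -- list of answer strings, one per model (hypothetical).
    `confirm`  -- callback standing in for the second-pass GPT-4 query.
    """
    counts = Counter(answers)
    top, freq = counts.most_common(1)[0]
    if freq == len(answers):      # all models agree: accept the answer
        return top
    return confirm(answers)      # inconsistent record: ask GPT-4 again

# Hypothetical stand-ins for the three first-stage model responses:
final = two_stage_vote(["B", "B", "C"], confirm=lambda a: "B")
```

In a real run, `confirm` would wrap an API call that re-poses the question to GPT-4 together with the conflicting candidate answers; here it is a placeholder lambda so the sketch runs standalone.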
Anthology ID: 2024.ccl-3.35
Volume: Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
Month: July
Year: 2024
Address: Taiyuan, China
Editors: Hongfei Lin, Hongye Tan, Bin Li
Venue: CCL
Publisher: Chinese Information Processing Society of China
Pages: 311–319
Language: English
URL: https://aclanthology.org/2024.ccl-3.35/
Cite (ACL): Chen Mosha. 2024. A Two-stage Prompt-Based Strategy for CRMUS Track 1. In Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations), pages 311–319, Taiyuan, China. Chinese Information Processing Society of China.
Cite (Informal): A Two-stage Prompt-Based Strategy for CRMUS Track 1 (Mosha, CCL 2024)
PDF: https://aclanthology.org/2024.ccl-3.35.pdf