Liu Daohuan


2025

This report presents the methodology and findings of prompting large language models (LLMs) for Chinese Factivity Inference (FI). We evaluated five LLMs, among which DeepSeek-R1 demonstrated the best overall performance. Chain-of-Thought (CoT) prompting, few-shot examples, and system-level instructions were combined for the final prompt. Additionally, we introduced a pairwise task scheduling strategy and a multi-agent disagreement arbitration mechanism to further enhance inference quality. Experimental results show that integrating the prompting, scheduling, and arbitration strategies significantly improves performance, with DeepSeek-R1 achieving 91.7% overall accuracy on the evaluation set. The report also highlights findings on LLM behavior in FI tasks and outlines potential directions for future improvement.
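The prompting setup described above can be sketched as follows. This is a hypothetical illustration only: the system instruction, few-shot examples, labels, and the majority-vote arbitration are placeholders, not the report's actual prompts, data, or arbitration mechanism.

```python
# Hypothetical sketch: assemble a system-level instruction, few-shot
# examples, and a Chain-of-Thought (CoT) cue into one chat message list,
# then arbitrate disagreements among several agents by majority vote.

SYSTEM_INSTRUCTION = (
    "You are a linguist judging Chinese factivity inference. "
    "Answer 'entailed', 'not entailed', or 'unknown'."
)

# Illustrative few-shot examples: (premise, hypothesis, label).
FEW_SHOT = [
    ("他知道会议取消了。", "会议取消了。", "entailed"),
    ("他以为会议取消了。", "会议取消了。", "unknown"),
]

COT_CUE = "Let's reason step by step about the factive predicate first."

def build_messages(premise: str, hypothesis: str) -> list[dict]:
    """Combine system instruction + few-shot demos + CoT cue for one item."""
    messages = [{"role": "system", "content": SYSTEM_INSTRUCTION}]
    for p, h, label in FEW_SHOT:
        messages.append({"role": "user", "content": f"Premise: {p}\nHypothesis: {h}"})
        messages.append({"role": "assistant", "content": label})
    messages.append({
        "role": "user",
        "content": f"Premise: {premise}\nHypothesis: {hypothesis}\n{COT_CUE}",
    })
    return messages

def arbitrate(votes: list[str]) -> str:
    """Toy disagreement arbitration: majority vote over agent answers."""
    return max(set(votes), key=votes.count)
```

In this sketch, each agent would receive the same message list, and `arbitrate` resolves label disagreements across agents; the report's pairwise scheduling is not modeled here.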

2023

“Collocate list and collocation network are two widely used representation methods of collocations, but they have significant weaknesses in representing contextual information. To solve this problem, we propose a new representation method, namely the contextualized representation of collocate (CRC), which highlights the importance of the position of the collocates and pins a collocate as the interaction of two dimensions: association strength and co-occurrence position. With a full image of all the collocates surrounding the node word, CRC carries the contextual information and makes the representation more informative and intuitive. Through three case studies, i.e., synonym distinction, image analysis, and efficiency in lexical use, we demonstrate the advantages of CRC in practical applications. CRC is also a new quantitative tool to measure lexical usage pattern similarities for corpus-based research. It can provide a new representation framework for language researchers and learners.”