Feng Wu
2024
QDMR-based Planning-and-Solving Prompting for Complex Reasoning Tasks
Jinfeng Huang | Qiaoqiao She | Wenbin Jiang | Hua Wu | Yang Hao | Tong Xu | Feng Wu
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Chain-of-Thought prompting has improved the reasoning capability of large language models (LLMs). However, it is still challenging to guarantee effectiveness and stability on questions that require complicated reasoning. Recently, Plan-and-Solve prompting has enhanced reasoning on complex questions by first planning the solution steps and then solving them step by step, but it struggles to represent and execute the problem-solving logic of complex questions. To deal with these challenges, we propose a novel Plan-and-Solve prompting method based on Question Decomposition Meaning Representation (QDMR). Specifically, the method first lets the LLM generate a QDMR graph, a directed acyclic graph composed of sub-questions, to represent the problem-solving logic. The LLM then generates a concrete solving process based on the QDMR graph. When solving each sub-question, it can locate the preceding sub-questions and their answers according to the QDMR graph, and then use this information in the solution. Compared with existing Plan-and-Solve prompting techniques, our method not only represents the problem-solving logic of complicated questions more accurately with the aid of the QDMR graph, but also delivers the dependency information between solution steps accurately. In addition, supervised fine-tuning on the Allen Institute dataset considerably enhances the LLM's capability to decompose complicated questions. Extensive experiments show that our method achieves significant improvements on arithmetic reasoning and commonsense reasoning tasks compared with classical Chain-of-Thought prompting and Plan-and-Solve prompting techniques, and the improvements are even greater for problems with more reasoning steps.
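The abstract describes solving sub-questions in the dependency order given by the QDMR graph and substituting the answers of preceding sub-questions into each step. Below is a minimal sketch of that control flow under stated assumptions: the `llm` placeholder, the `solve_qdmr` helper, and the graph encoding are hypothetical illustrations, not the paper's actual prompts or code (QDMR itself conventionally references earlier steps with "#k" placeholders).

```python
# Sketch of plan-and-solve execution over a QDMR graph (illustrative only).
# Sub-questions form a directed acyclic graph; each step may reference the
# answers of preceding steps with "#k" placeholders.
from graphlib import TopologicalSorter

def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model (hypothetical)."""
    raise NotImplementedError

def solve_qdmr(sub_questions: dict[int, str], deps: dict[int, list[int]]) -> str:
    """sub_questions: step id -> sub-question text; deps: step id -> ids of
    the preceding sub-questions whose answers the step depends on."""
    graph = {step: deps.get(step, []) for step in sub_questions}
    answers: dict[int, str] = {}
    # Solve sub-questions in topological (dependency) order.
    for step in TopologicalSorter(graph).static_order():
        question = sub_questions[step]
        # Locate preceding answers via the graph and substitute them in.
        for parent in graph[step]:
            question = question.replace(f"#{parent}", answers[parent])
        answers[step] = llm(question)
    return answers[max(sub_questions)]  # the final step answers the question

# Example decomposition of "How many more apples than oranges are there?":
# solve_qdmr({1: "How many apples are there?",
#             2: "How many oranges are there?",
#             3: "What is #1 minus #2?"},
#            {3: [1, 2]})
```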
2021
Deep Cognitive Reasoning Network for Multi-hop Question Answering over Knowledge Graphs
Jianyu Cai | Zhanqiu Zhang | Feng Wu | Jie Wang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2020
SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis
Hao Tian | Can Gao | Xinyan Xiao | Hao Liu | Bolei He | Hua Wu | Haifeng Wang | Feng Wu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recently, sentiment analysis has seen remarkable advances with the help of pre-training approaches. However, sentiment knowledge, such as sentiment words and aspect-sentiment pairs, is ignored in the process of pre-training, despite being widely used in traditional sentiment analysis approaches. In this paper, we introduce Sentiment Knowledge Enhanced Pre-training (SKEP) in order to learn a unified sentiment representation for multiple sentiment analysis tasks. With the help of automatically-mined knowledge, SKEP conducts sentiment masking and constructs three sentiment knowledge prediction objectives, so as to embed sentiment information at the word, polarity, and aspect level into the pre-trained sentiment representation. In particular, the prediction of aspect-sentiment pairs is converted into multi-label classification, aiming to capture the dependency between the words in a pair. Experiments on three kinds of sentiment tasks show that SKEP significantly outperforms strong pre-training baselines and achieves new state-of-the-art results on most of the test datasets. We release our code at https://github.com/baidu/Senta.
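The abstract's key modeling move is casting aspect-sentiment pair prediction as multi-label classification: a masked pair position should predict both words of the pair, so the target is multi-hot and the loss is a per-token sigmoid rather than a softmax. Below is a minimal sketch of that objective, assuming a toy vocabulary and random encoder outputs; it is my illustration, not the released Senta code.

```python
# Sketch of SKEP-style multi-label aspect-sentiment pair prediction
# (illustrative only; toy sizes and ids, not the Senta implementation).
import torch
import torch.nn.functional as F

vocab_size = 1000                          # toy vocabulary (assumption)
hidden = torch.randn(4, 128)               # encoder outputs at 4 masked pair positions
classifier = torch.nn.Linear(128, vocab_size)

# Multi-hot targets: each masked pair position should predict two tokens,
# the aspect word id and the sentiment word id of the mined pair.
pairs = [(12, 87), (40, 3), (5, 900), (77, 64)]  # (aspect_id, sentiment_id), toy ids
targets = torch.zeros(4, vocab_size)
for row, (aspect_id, sentiment_id) in enumerate(pairs):
    targets[row, aspect_id] = 1.0
    targets[row, sentiment_id] = 1.0

logits = classifier(hidden)
# Independent sigmoids over the vocabulary let one position predict both
# words of a pair, capturing the dependency between them in a single loss.
loss = F.binary_cross_entropy_with_logits(logits, targets)
loss.backward()
```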