Jie Zhu


2024

Benchmarking Large Language Models on CFLUE - A Chinese Financial Language Understanding Evaluation Dataset
Jie Zhu | Junhui Li | Yalong Wen | Lifan Guo
Findings of the Association for Computational Linguistics: ACL 2024

In light of recent breakthroughs in large language models (LLMs) that have revolutionized natural language processing (NLP), there is an urgent need for new benchmarks to keep pace with the rapid development of LLMs. In this paper, we propose CFLUE, the Chinese Financial Language Understanding Evaluation benchmark, designed to assess the capability of LLMs across various dimensions. Specifically, CFLUE provides datasets tailored for both knowledge assessment and application assessment. In knowledge assessment, it consists of 38K+ multiple-choice questions with associated solution explanations. These questions serve dual purposes: answer prediction and question reasoning. In application assessment, CFLUE features 16K+ test instances across distinct groups of NLP tasks such as text classification, machine translation, relation extraction, reading comprehension, and text generation. On CFLUE, we conduct a thorough evaluation of representative LLMs. The results reveal that only Qwen-72B, GPT-4, and GPT-4-turbo achieve an accuracy exceeding 60% in answer prediction for knowledge assessment, suggesting that there is still substantial room for improvement in current LLMs. In application assessment, while GPT-4 and GPT-4-turbo rank as the top two performers on average, their significant advantage over open-source LLMs is noticeably diminished, given that Qwen-72B achieves the best performance in 2 out of 5 tasks. The datasets and scripts associated with CFLUE are openly accessible at https://github.com/aliyun/cflue.
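A minimal sketch of how answer-prediction accuracy on CFLUE-style multiple-choice questions might be scored. The field names ("question", "options", "answer"), the A-D option letters, and the answer-extraction heuristic are illustrative assumptions, not the official evaluation scripts (those live in the linked repository).

```python
import re
from typing import Callable

def extract_choice(model_output: str) -> str | None:
    """Pull the first option letter (A-D) out of a free-form model response."""
    match = re.search(r"\b([A-D])\b", model_output.strip().upper())
    return match.group(1) if match else None

def answer_prediction_accuracy(questions: list[dict],
                               ask_model: Callable[[str], str]) -> float:
    """Accuracy over multiple-choice items; each item holds the question text,
    a dict of option letter -> option text, and the gold answer letter."""
    correct = 0
    for item in questions:
        prompt = item["question"] + "\n" + "\n".join(
            f"{letter}. {text}" for letter, text in item["options"].items()
        )
        predicted = extract_choice(ask_model(prompt))
        correct += int(predicted == item["answer"])
    return correct / len(questions)
```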

2020

融合目标端句法的AMR-to-Text生成(AMR-to-Text Generation with Target Syntax)
Jie Zhu (朱杰) | Junhui Li (李军辉)
Proceedings of the 19th Chinese National Conference on Computational Linguistics

The task of AMR-to-Text generation is to produce, given an AMR graph, text that expresses the same meaning. The task can be viewed as machine translation from a source-side AMR graph to a target-side sentence. Existing approaches explore how to better model the graph structure. However, they all leave the task underspecified: many syntactic decisions made during generation are not constrained by the semantic graph, so the latent syntactic information within the sentence is ignored. To address this shortcoming explicitly, this paper proposes a simple and effective method that explicitly incorporates syntactic information into AMR-to-Text generation, with experiments on the Transformer and on the previous state-of-the-art model for this task. Experimental results show significant improvements and new state-of-the-art performance on two standard English benchmark datasets, LDC2018E86 and LDC2017T10.
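A minimal sketch of one common way to expose target-side syntax to a seq2seq AMR-to-Text model: linearize the constituency parse of the target sentence into the output sequence during training, then strip the syntax tokens at inference time. This is an illustrative assumption about syntax fusion in general, not necessarily the paper's exact scheme.

```python
def linearize_with_syntax(tree: tuple) -> list[str]:
    """Flatten a nested (label, children...) constituency tree into a token
    sequence with labeled opening brackets, e.g.
    ('S', ('NP', 'he'), ('VP', 'runs')) -> ['(S', '(NP', 'he', ')', '(VP', 'runs', ')', ')']."""
    label, *children = tree
    tokens = [f"({label}"]
    for child in children:
        tokens.extend(linearize_with_syntax(child) if isinstance(child, tuple) else [child])
    tokens.append(")")
    return tokens

def strip_syntax(tokens: list[str]) -> str:
    """Recover the plain sentence from a syntax-augmented output sequence."""
    return " ".join(t for t in tokens if t != ")" and not t.startswith("("))

# The model is trained to emit the bracketed sequence instead of the bare
# sentence, so syntactic decisions are made explicitly during generation.
augmented = linearize_with_syntax(("S", ("NP", "he"), ("VP", "runs")))
print(augmented)                # ['(S', '(NP', 'he', ')', '(VP', 'runs', ')', ')']
print(strip_syntax(augmented))  # "he runs"
```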

2019

Modeling Graph Structure in Transformer for Better AMR-to-Text Generation
Jie Zhu | Junhui Li | Muhua Zhu | Longhua Qian | Min Zhang | Guodong Zhou
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Recent studies on AMR-to-text generation often formalize the task as a sequence-to-sequence (seq2seq) learning problem by converting an Abstract Meaning Representation (AMR) graph into a word sequence. Graph structures are further modeled into the seq2seq framework in order to utilize the structural information in the AMR graphs. However, previous approaches only consider the relations between directly connected concepts while ignoring the rich structure in AMR graphs. In this paper, we eliminate such a strong limitation and propose a novel structure-aware self-attention approach to better model the relations between indirectly connected concepts in the state-of-the-art seq2seq model, i.e., the Transformer. In particular, a few different methods are explored to learn structural representations between two concepts. Experimental results on English AMR benchmark datasets show that our approach significantly outperforms the state-of-the-art with 29.66 and 31.82 BLEU scores on LDC2015E86 and LDC2017T10, respectively. To the best of our knowledge, these are the best results achieved so far by supervised models on the benchmarks.
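A minimal sketch of structure-aware self-attention in the spirit described above: the attention between two concepts is conditioned on a learned embedding of the graph relation (e.g. the shortest label path) connecting them. The single-head simplification, tensor layout, and relation vocabulary are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class StructureAwareSelfAttention(nn.Module):
    def __init__(self, d_model: int, num_relations: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # One embedding per relation type between a pair of concepts
        # (e.g. ":ARG0", ":ARG0 :mod", "none"), added to keys and values.
        self.rel_k = nn.Embedding(num_relations, d_model)
        self.rel_v = nn.Embedding(num_relations, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor, relation_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, d_model) concept representations
        # relation_ids: (batch, n, n) index of the relation between each concept pair
        q, k, v = self.q(x), self.k(x), self.v(x)
        rk, rv = self.rel_k(relation_ids), self.rel_v(relation_ids)  # (b, n, n, d)
        # Content-based score plus a structural term from the relation embedding.
        scores = torch.einsum("bid,bjd->bij", q, k)
        scores = scores + torch.einsum("bid,bijd->bij", q, rk)
        attn = torch.softmax(scores * self.scale, dim=-1)
        # Aggregate values, again with a relation-dependent contribution.
        out = torch.einsum("bij,bjd->bid", attn, v)
        out = out + torch.einsum("bij,bijd->bid", attn, rv)
        return out
```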