Muhua Zhu


2021

Exploiting Image Captions and External Knowledge as Representation Enhancement for Visual Question Answering
Gechao Wang (王屹超) | Muhua Zhu (朱慕华) | Chen Xu (许晨) | Yan Zhang (张琰) | Huizhen Wang (王会珍) | Jingbo Zhu (朱靖波)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Visual question answering (VQA) is a multimodal task that requires a deep understanding of both the image and the textual question in order to infer the answer. In many cases, however, simple reasoning over the image and the question alone is not enough to reach the correct answer; other useful information, such as image captions and external knowledge, can in fact be exploited. To address this problem, this paper proposes a VQA model that enhances its representations with image captions and external knowledge. Guided by the question, the model encodes the image and its caption separately through a co-attention mechanism, and uses knowledge graph embeddings to encode external knowledge into the model, enriching its feature representations and strengthening its reasoning ability. Experimental results on the OK-VQA dataset show that the proposed method improves accuracy by 1.71% over the baseline system and by 1.88% over mainstream models from prior work, demonstrating its effectiveness.
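
As an illustration of the question-guided co-attention idea, the following minimal PyTorch sketch encodes the question against image-region features and caption-token features and concatenates a knowledge-graph embedding before answer classification. The module names, dimensions, and fusion choices are assumptions for exposition, not the authors' released model.

    # Minimal sketch (illustrative, not the paper's code): question-guided
    # co-attention over image regions and caption tokens, plus an external
    # knowledge-graph embedding concatenated before answer classification.
    import torch
    import torch.nn as nn

    class CoAttentionVQA(nn.Module):
        def __init__(self, hidden=512, num_answers=2000, kg_dim=200):
            super().__init__()
            self.img_att = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
            self.cap_att = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
            self.classifier = nn.Sequential(
                nn.Linear(hidden * 2 + kg_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, num_answers),
            )

        def forward(self, q, img_feats, cap_feats, kg_emb):
            # q: (B, Lq, H) question states; img_feats: (B, R, H) region features;
            # cap_feats: (B, Lc, H) caption token states; kg_emb: (B, kg_dim).
            img_ctx, _ = self.img_att(q, img_feats, img_feats)  # question attends to image
            cap_ctx, _ = self.cap_att(q, cap_feats, cap_feats)  # question attends to caption
            fused = torch.cat([img_ctx.mean(1), cap_ctx.mean(1), kg_emb], dim=-1)
            return self.classifier(fused)  # logits over the answer vocabulary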

XLPT-AMR: Cross-Lingual Pre-Training via Multi-Task Learning for Zero-Shot AMR Parsing and Text Generation
Dongqin Xu | Junhui Li | Muhua Zhu | Min Zhang | Guodong Zhou
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Due to the scarcity of annotated data, Abstract Meaning Representation (AMR) research is relatively limited and challenging for languages other than English. Given the availability of an English AMR dataset and English-to-X parallel datasets, in this paper we propose a novel cross-lingual pre-training approach via multi-task learning (MTL) for both zero-shot AMR parsing and AMR-to-text generation. Specifically, we consider three types of relevant tasks, including AMR parsing, AMR-to-text generation, and machine translation. We hope that knowledge gained while learning English AMR parsing and text generation can be transferred to the counterparts in other languages. With properly pre-trained models, we explore four different fine-tuning methods, i.e., vanilla fine-tuning with a single task, one-for-all MTL fine-tuning, targeted MTL fine-tuning, and teacher-student-based MTL fine-tuning. Experimental results on AMR parsing and text generation for multiple non-English languages demonstrate that our approach significantly outperforms a strong pre-training baseline and greatly advances the state of the art. In detail, on LDC2020T07 we achieve Smatch F1 scores of 70.45%, 71.76%, and 70.80% for AMR parsing of German, Spanish, and Italian, respectively, while for AMR-to-text generation in these languages we obtain BLEU scores of 25.69, 31.36, and 28.42, respectively. We make our code available on GitHub at https://github.com/xdqkid/XLPT-AMR.
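
As a rough sketch of the one-for-all MTL fine-tuning variant, the function below alternates mini-batches from several tasks (e.g., AMR parsing, AMR-to-text generation, and machine translation) through one shared seq2seq model. The uniform task sampling and the Hugging Face-style model(input_ids=..., labels=...).loss call are assumptions, not the paper's exact recipe.

    # Illustrative one-for-all multi-task fine-tuning loop: each step samples a
    # task, draws a batch from it, and updates the single shared seq2seq model.
    import random

    def mtl_finetune(model, optimizer, task_loaders, steps):
        # task_loaders: dict mapping task name -> iterable of (source_ids, target_ids)
        iters = {name: iter(loader) for name, loader in task_loaders.items()}
        for _ in range(steps):
            name = random.choice(list(iters))           # pick a task for this step
            try:
                src, tgt = next(iters[name])
            except StopIteration:
                iters[name] = iter(task_loaders[name])  # restart an exhausted task
                src, tgt = next(iters[name])
            loss = model(input_ids=src, labels=tgt).loss  # shared parameters for all tasks
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()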

2020

Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation
Ning Ding | Dingkun Long | Guangwei Xu | Muhua Zhu | Pengjun Xie | Xiaobin Wang | Haitao Zheng
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Fully supervised neural approaches have achieved significant progress on the task of Chinese word segmentation (CWS). Nevertheless, the performance of supervised models drops sharply when the domain shifts, owing to the distribution gap across domains and the out-of-vocabulary (OOV) problem. To alleviate both issues simultaneously, this paper couples distant annotation with adversarial training for cross-domain CWS. 1) We rethink the essence of “Chinese words” and design an automatic distant annotation mechanism that requires no supervision or pre-defined dictionaries for the target domain. The method effectively discovers domain-specific words and distantly annotates raw texts in the target domain. 2) We further develop a sentence-level adversarial training procedure to perform noise reduction and make maximum use of the source-domain information, as sketched below. Experiments on multiple real-world datasets across various domains show the superiority and robustness of our model, which significantly outperforms previous state-of-the-art cross-domain CWS methods.
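
The sentence-level adversarial training can be pictured as a DANN-style setup: a domain discriminator reads sentence representations through a gradient-reversal layer, pushing the segmentation encoder toward domain-invariant features. The sketch below is a generic reconstruction under that assumption, not the authors' implementation.

    # Illustrative gradient-reversal layer and domain discriminator for
    # sentence-level adversarial training (generic DANN-style sketch).
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Flip the gradient flowing back into the encoder.
            return -ctx.lambd * grad_output, None

    class DomainDiscriminator(nn.Module):
        def __init__(self, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))

        def forward(self, sent_repr, lambd=1.0):
            # sent_repr: (B, hidden) sentence representation from the CWS encoder.
            return self.net(GradReverse.apply(sent_repr, lambd))  # source/target logits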

Improving AMR Parsing with Sequence-to-Sequence Pre-training
Dongqin Xu | Junhui Li | Muhua Zhu | Min Zhang | Guodong Zhou
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

In the literature, research on Abstract Meaning Representation (AMR) parsing is much restricted by the size of the human-curated datasets that are critical to building an AMR parser with good performance. To alleviate this data-size restriction, pre-trained models have been drawing more and more attention in AMR parsing. However, previous pre-trained models, like BERT, are built for general purposes and may not work as expected for the specific task of AMR parsing. In this paper, we focus on sequence-to-sequence (seq2seq) AMR parsing and propose a seq2seq pre-training approach to build pre-trained models, both individually and jointly, on three relevant tasks, i.e., machine translation, syntactic parsing, and AMR parsing itself. Moreover, we extend the vanilla fine-tuning method to a multi-task learning fine-tuning method that optimizes for AMR parsing performance while endeavoring to preserve the response of the pre-trained models. Extensive experimental results on two English benchmark datasets show that both the single and joint pre-trained models significantly improve the performance (e.g., from 71.5 to 80.2 on AMR 2.0), which reaches the state of the art. The result is very encouraging since we achieve this with seq2seq models rather than complex models. We make our code and model available at https://github.com/xdqkid/S2S-AMR-Parser.
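
As a hedged sketch of the multi-task fine-tuning objective, the step below mixes the AMR parsing loss with a down-weighted loss on an auxiliary pre-training task (e.g., machine translation) so the fine-tuned model keeps part of its pre-trained behaviour. The weighting scheme and the seq2seq API used are illustrative assumptions.

    # Illustrative fine-tuning step: optimize AMR parsing while preserving an
    # auxiliary pre-training task through a weighted joint loss.
    def mixed_finetune_step(model, optimizer, amr_batch, aux_batch, aux_weight=0.3):
        amr_src, amr_tgt = amr_batch   # sentence -> linearized AMR
        aux_src, aux_tgt = aux_batch   # auxiliary task, e.g. source -> translation
        amr_loss = model(input_ids=amr_src, labels=amr_tgt).loss
        aux_loss = model(input_ids=aux_src, labels=aux_tgt).loss
        loss = amr_loss + aux_weight * aux_loss  # keep the auxiliary behaviour alive
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        return amr_loss.item(), aux_loss.item()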

2019

Modeling Graph Structure in Transformer for Better AMR-to-Text Generation
Jie Zhu | Junhui Li | Muhua Zhu | Longhua Qian | Min Zhang | Guodong Zhou
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Recent studies on AMR-to-text generation often formalize the task as a sequence-to-sequence (seq2seq) learning problem by converting an Abstract Meaning Representation (AMR) graph into a word sequence. Graph structure is further modeled within the seq2seq framework in order to utilize the structural information in AMR graphs. However, previous approaches only consider the relations between directly connected concepts, ignoring the rich structure in AMR graphs. In this paper, we remove this strong limitation and propose a novel structure-aware self-attention approach to better model the relations between indirectly connected concepts in the state-of-the-art seq2seq model, i.e., the Transformer. In particular, several different methods are explored to learn structural representations between two concepts. Experimental results on English AMR benchmark datasets show that our approach significantly outperforms the state of the art, with BLEU scores of 29.66 and 31.82 on LDC2015E86 and LDC2017T10, respectively. To the best of our knowledge, these are the best results achieved so far by supervised models on these benchmarks.
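
To make the structure-aware self-attention idea concrete, the single-head sketch below adds a learned bias for the graph relation between every pair of concepts to the attention logits, so even indirectly connected concepts can interact. The relation vocabulary, shapes, and scalar-bias parameterization are assumptions rather than the paper's exact structural representations.

    # Illustrative single-head structure-aware self-attention: attention logits
    # receive a learned bias indexed by the relation between each concept pair.
    import math
    import torch
    import torch.nn as nn

    class StructureAwareAttention(nn.Module):
        def __init__(self, d_model=512, num_relations=64):
            super().__init__()
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)
            self.rel_bias = nn.Embedding(num_relations, 1)  # one scalar bias per relation type

        def forward(self, x, rel_ids):
            # x: (B, N, d_model) concept states; rel_ids: (B, N, N) id of the
            # relation (e.g. label of the connecting path) between concept pairs.
            q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
            scores = q @ k.transpose(-1, -2) / math.sqrt(x.size(-1))
            scores = scores + self.rel_bias(rel_ids).squeeze(-1)  # structural term
            return torch.softmax(scores, dim=-1) @ v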

2017

Modeling Source Syntax for Neural Machine Translation
Junhui Li | Deyi Xiong | Zhaopeng Tu | Muhua Zhu | Min Zhang | Guodong Zhou
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Even though a linguistics-free sequence-to-sequence model for neural machine translation (NMT) has a certain capability of implicitly learning syntactic information about source sentences, this paper shows that source syntax can be explicitly and effectively incorporated into NMT to provide further improvements. Specifically, we linearize the parse trees of source sentences to obtain structural label sequences. On this basis, we propose three different kinds of encoders to incorporate source syntax into NMT: 1) a Parallel RNN encoder that learns word and label annotation vectors in parallel; 2) a Hierarchical RNN encoder that learns word and label annotation vectors in a two-level hierarchy; and 3) a Mixed RNN encoder that learns word and label annotation vectors over a single sequence in which words and labels are interleaved (see the sketch below). Experiments on Chinese-to-English translation demonstrate that all three proposed syntactic encoders improve translation accuracy. Interestingly, the simplest of them, the Mixed RNN encoder, yields the best performance, with a significant improvement of 1.4 BLEU points. Moreover, an in-depth analysis from several perspectives is provided to reveal how source syntax benefits NMT.
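
As a small illustration of the Mixed RNN encoder's input, the helper below interleaves structural labels from a bracketed source-side parse with the words, producing the single mixed sequence fed to an ordinary RNN encoder. The bracket format and the closing-label scheme are assumptions for exposition, not necessarily the paper's exact linearization.

    # Illustrative construction of a mixed word/label sequence from a bracketed parse.
    def mixed_sequence(bracketed_parse):
        # "(S (NP (PRP He)) (VP (VBZ runs)))" ->
        # ['(S', '(NP', '(PRP', 'He', ')PRP', ')NP', '(VP', '(VBZ', 'runs', ')VBZ', ')VP', ')S']
        toks = bracketed_parse.replace("(", " ( ").replace(")", " ) ").split()
        out, stack, i = [], [], 0
        while i < len(toks):
            if toks[i] == "(":
                out.append("(" + toks[i + 1])  # opening structural label
                stack.append(toks[i + 1])
                i += 2
            elif toks[i] == ")":
                out.append(")" + stack.pop())  # matching closing label
                i += 1
            else:
                out.append(toks[i])            # a source word
                i += 1
        return out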

2016

SoNLP-DP System for CoNLL-2016 English Shallow Discourse Parsing
Fang Kong | Sheng Li | Junhui Li | Muhua Zhu | Guodong Zhou
Proceedings of the CoNLL-16 shared task

SoNLP-DP System for CoNLL-2016 Chinese Shallow Discourse Parsing
Junhui Li | Fang Kong | Sheng Li | Muhua Zhu | Guodong Zhou
Proceedings of the CoNLL-16 shared task

2015

Improving Semantic Parsing with Enriched Synchronous Context-Free Grammar
Junhui Li | Muhua Zhu | Wei Lu | Guodong Zhou
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

NiuParser: A Chinese Syntactic and Semantic Parsing Toolkit
Jingbo Zhu | Muhua Zhu | Qiang Wang | Tong Xiao
Proceedings of ACL-IJCNLP 2015 System Demonstrations

2013

Fast and Accurate Shift-Reduce Constituent Parsing
Muhua Zhu | Yue Zhang | Wenliang Chen | Min Zhang | Jingbo Zhu
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

Exploiting Lexical Dependencies from Large-Scale Data for Better Shift-Reduce Constituency Parsing
Muhua Zhu | Jingbo Zhu | Huizhen Wang
Proceedings of COLING 2012

2011

Better Automatic Treebank Conversion Using A Feature-Based Approach
Muhua Zhu | Jingbo Zhu | Minghan Hu
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

Boosting-Based System Combination for Machine Translation
Tong Xiao | Jingbo Zhu | Muhua Zhu | Huizhen Wang
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

Heterogeneous Parsing via Collaborative Decoding
Muhua Zhu | Jingbo Zhu | Tong Xiao
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

An Empirical Study of Translation Rule Extraction with Multiple Parsers
Tong Xiao | Jingbo Zhu | Hao Zhang | Muhua Zhu
Coling 2010: Posters

Automatic Treebank Conversion via Informed Decoding
Muhua Zhu | Jingbo Zhu
Coling 2010: Posters

High OOV-Recall Chinese Word Segmenter
Xiaoming Xu | Muhua Zhu | Xiaoxu Fei | Jingbo Zhu
CIPS-SIGHAN Joint Conference on Chinese Language Processing

2009

Chinese-English Organization Name Translation Based on Correlative Expansion
Feiliang Ren | Muhua Zhu | Huizhen Wang | Jingbo Zhu
Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009)

2006

Exploring Distributional Similarity Based Models for Query Spelling Correction
Mu Li | Muhua Zhu | Yang Zhang | Ming Zhou
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

Designing Special Post-Processing Rules for SVM-Based Chinese Word Segmentation
Muhua Zhu | Yilin Wang | Zhenxing Wang | Huizhen Wang | Jingbo Zhu
Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing