Guodong Zhou

Also published as: GuoDong Zhou


2024

Cross-domain NER with Generated Task-Oriented Knowledge: An Empirical Study from Information Density Perspective
Zhihao Zhang | Sophia Yat Mei Lee | Junshuang Wu | Dong Zhang | Shoushan Li | Erik Cambria | Guodong Zhou
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Cross-domain Named Entity Recognition (CDNER) is crucial for Knowledge Graph (KG) construction and natural language processing (NLP), enabling transfer from source to target domains with limited data. Previous studies often rely on manually collected entity-relevant sentences from the web or attempt to bridge the gap between tokens and entity labels across domains. These approaches are time-consuming and inefficient, as the collected data are often weakly correlated with the target task and require extensive pre-training. To address these issues, we propose automatically generating task-oriented knowledge (GTOK) using large language models (LLMs), focusing on the reasoning process of entity extraction. Then, we employ task-oriented pre-training (TOPT) to facilitate domain adaptation. Additionally, current cross-domain NER methods often lack explicit explanations for their effectiveness. Therefore, we introduce the concept of information density to better evaluate the model's effectiveness before performing entity recognition. We conduct systematic experiments and analyses to demonstrate the effectiveness of our proposed approach and the validity of using information density for model evaluation.
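
As a rough illustration of the GTOK idea, the sketch below prompts an LLM to verbalize the reasoning behind entity extraction for a target domain. The template and the `llm_generate` callable are hypothetical placeholders, not the paper's actual prompts or API.

```python
# Hypothetical sketch of task-oriented knowledge generation (GTOK).
# `llm_generate` is a placeholder for any LLM completion API; the actual
# prompts and model used in the paper are not reproduced here.

GTOK_TEMPLATE = (
    "Sentence: {sentence}\n"
    "Target domain: {domain}\n"
    "Explain, step by step, which spans are named entities of the types "
    "{types}, and why. Focus on the reasoning process of entity extraction."
)

def generate_task_knowledge(sentence: str, domain: str, types: list[str],
                            llm_generate) -> str:
    """Produce entity-extraction reasoning to be used as pre-training text."""
    prompt = GTOK_TEMPLATE.format(sentence=sentence, domain=domain,
                                  types=", ".join(types))
    return llm_generate(prompt)

# Example run with a dummy LLM that returns a canned string:
knowledge = generate_task_knowledge(
    "The FDA approved Keytruda for melanoma.", "biomedical",
    ["Drug", "Disease", "Organization"], llm_generate=lambda p: "[LLM output]")
print(knowledge)
```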

End-to-End Emotion Semantic Parsing
Xiaotong Jiang | Zhongqing Wang | Guodong Zhou
Findings of the Association for Computational Linguistics: ACL 2024

Emotion detection is the task of automatically associating one or more emotions with a text. The emotions are experienced, targeted, and caused by different semantic constituents, so it is necessary to incorporate these constituents into the process of emotion detection. In this study, we propose a new task called emotion semantic parsing, which aims to parse the emotion and its semantic constituents into an abstract semantic tree structure. In particular, we design an end-to-end generation model to capture the relations between the emotion and all the semantic constituents, and to generate them jointly. Furthermore, we employ a task decomposition strategy to capture the semantic relations among these constituents in a more cognitive and structural way. Experimental results demonstrate the importance of the proposed task and indicate that the proposed model gives superior performance compared to other models.

Transition-based Opinion Generation for Aspect-based Sentiment Analysis
Tianlai Ma | Zhongqing Wang | Guodong Zhou
Findings of the Association for Computational Linguistics: ACL 2024

Recently, the use of pre-trained generation models for extracting sentiment elements has led to significant advances on aspect-based sentiment analysis benchmarks. However, these approaches often overlook the importance of explicitly modeling the structure among sentiment elements. To address this limitation, we integrate general pre-trained sequence-to-sequence language models with a structure-aware transition-based approach. Specifically, we propose a transition system for opinion tree generation, designed to better exploit pre-trained language models for structured fine-tuning. The proposed transition system ensures the structural integrity of the generated opinion tree, and by leveraging pre-trained generation models and simplifying the transition set, we maximize the accuracy of opinion tree generation. Extensive experiments show that our model significantly advances the state-of-the-art performance on several benchmark datasets. In addition, empirical studies indicate that the proposed opinion tree generation with the transition system captures the sentiment structure more effectively than other generation models.
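
To make the transition-based idea concrete, here is a minimal shift-reduce-style sketch that deterministically executes a predicted action sequence into a tree. The action inventory (SHIFT, REDUCE-label) is invented for illustration; the paper defines its own transition set for opinion trees.

```python
# Toy shift-reduce transition system for building an opinion-tree-like
# structure from a predicted action sequence. The action inventory here
# is invented for illustration; the paper defines its own transition set.

def execute_transitions(tokens, actions):
    """Apply (SHIFT | REDUCE-label) actions; return the root node."""
    stack, buffer = [], list(tokens)
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))             # move next token to stack
        elif act.startswith("REDUCE"):
            label = act.split("-", 1)[1]
            right, left = stack.pop(), stack.pop()  # combine top two items
            stack.append((label, left, right))
    assert len(stack) == 1 and not buffer, "ill-formed action sequence"
    return stack[0]

tree = execute_transitions(
    ["battery", "great"],
    ["SHIFT", "SHIFT", "REDUCE-ASPECT_OPINION"])
print(tree)   # ('ASPECT_OPINION', 'battery', 'great')
```

A real system would predict the action sequence with the pre-trained generator and reject actions that violate the preconditions above, which is how structural integrity is enforced.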

ChatASU: Evoking LLM’s Reflexion to Truly Understand Aspect Sentiment in Dialogues
Yiding Liu | Jingjing Wang | Jiamin Luo | Tao Zeng | Guodong Zhou
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Aspect Sentiment Understanding (ASU) in interactive scenarios (e.g., question answering and dialogue) has attracted ever-more interest in recent years and achieved important progress. However, existing studies on interactive ASU largely ignore the coreference issue for opinion targets (i.e., aspects), even though this phenomenon is ubiquitous in interactive scenarios, especially dialogues, limiting ASU performance. Recently, large language models (LLMs) have shown a powerful ability to integrate various NLP tasks under the chat paradigm. Accordingly, this paper proposes a new Chat-based Aspect Sentiment Understanding (ChatASU) task, aiming to explore LLMs' ability to understand aspect sentiments in dialogue scenarios. In particular, ChatASU introduces a sub-task, the Aspect Chain Reasoning (ACR) task, to address the aspect coreference issue. On this basis, we propose a Trusted Self-reflexion Approach (TSA) with ChatGLM as the backbone for ChatASU. Specifically, TSA treats the ACR task as an auxiliary task to boost the performance of the primary ASU task, and further integrates trusted learning into the reflexion mechanism to alleviate the factual hallucination problem intrinsic to LLMs. Furthermore, a high-quality ChatASU dataset is annotated to evaluate TSA, and extensive experiments show that our proposed TSA significantly outperforms several state-of-the-art baselines, justifying the effectiveness of TSA for ChatASU and the importance of considering the coreference and hallucination issues in ChatASU.

Demonstration Retrieval-Augmented Generative Event Argument Extraction
Shiming He | Yu Hong | Shuai Yang | Jianmin Yao | Guodong Zhou
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

We tackle Event Argument Extraction (EAE) in the manner of template-based generation. Our exploration of generative EAE shows that it suffers from several issues, such as predicting multiple arguments for one role, generating words out of context, and inconsistency with the prescribed format. We attribute these issues to the model's weakness in following complex input prompts. To address these problems, we propose demonstration retrieval-augmented generative EAE (DRAGEAE), containing two components: an event knowledge-injected generator (EKG) and a demonstration retriever (DR). EKG employs event knowledge prompts to capture role dependencies and semantics. DR searches for informative demonstrations in the training data, facilitating the conditional generation of EKG. To train DR, we use probability-based rankings from large language models (LLMs) as supervision signals. Experimental results on ACE-2005, RAMS and WIKIEVENTS demonstrate that our method outperforms all strong baselines and generalizes to various datasets. Further analysis discusses the impact of diverse LLMs and shows that our model alleviates the above issues.
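
For intuition, demonstration retrieval can be approximated with a simple unsupervised similarity search; the paper instead trains its retriever with LLM probability-based rankings as supervision. A minimal TF-IDF stand-in:

```python
# Minimal demonstration retriever. The paper trains its retriever (DR) with
# probability-based rankings from LLMs as supervision; this sketch uses
# unsupervised TF-IDF cosine similarity as a simpler stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_examples = [
    "An earthquake struck the city, killing 12 people.",
    "The company acquired its rival for $2 billion.",
    "Rebels attacked the convoy near the border.",
]

vectorizer = TfidfVectorizer()
train_matrix = vectorizer.fit_transform(train_examples)

def retrieve_demonstrations(query: str, k: int = 2):
    """Return the k training examples most similar to the query."""
    sims = cosine_similarity(vectorizer.transform([query]), train_matrix)[0]
    top = sims.argsort()[::-1][:k]
    return [train_examples[i] for i in top]

demos = retrieve_demonstrations("Militants attacked a police station.")
print(demos)  # demonstrations to prepend to the generator's input prompt
```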

How to Understand “Support”? An Implicit-enhanced Causal Inference Approach for Weakly-supervised Phrase Grounding
Jiamin Luo | Jianing Zhao | Jingjing Wang | Guodong Zhou
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Weakly-supervised Phrase Grounding (WPG) is an emerging task of inferring fine-grained phrase-region matching while merely leveraging coarse-grained sentence-image pairs for training. However, existing studies on WPG largely ignore implicit phrase-region matching relations, which are crucial for evaluating the capability of models in understanding deep multimodal semantics. To this end, this paper proposes an Implicit-Enhanced Causal Inference (IECI) approach to address the challenges of modeling the implicit relations and highlighting them beyond the explicit ones. Specifically, the approach leverages intervention and counterfactual techniques to tackle these two challenges, respectively. Furthermore, a high-quality implicit-enhanced dataset is annotated to evaluate IECI, and detailed evaluations show the great advantages of IECI over state-of-the-art baselines. In particular, we observe the interesting finding that IECI outperforms advanced multimodal LLMs by a large margin on this implicit-enhanced dataset, which may encourage more research on evaluating multimodal LLMs in this direction.

Leveraging AMR Graph Structure for Better Sequence-to-Sequence AMR Parsing
Linyu Fan | Wu Wu Yiheng | Jun Xie | Junhui Li | Fang Kong | Guodong Zhou
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Thanks to the development of pre-trained sequence-to-sequence (seq2seq) models (e.g., BART), recent studies on AMR parsing often regard the task as a seq2seq translation problem, linearizing AMR graphs into AMR token sequences in pre-processing and recovering AMR graphs from sequences in post-processing. Seq2seq AMR parsing is a relatively simple paradigm, but it unavoidably loses structural information among AMR tokens. To compensate for this loss, we explicitly leverage AMR structure in the decoding phase. Given an AMR graph, we first project its structure onto an AMR token graph, i.e., the structure among AMR tokens in the linearized sequence. The structure for an AMR token can be divided into two parts: structure in the prediction history and structure in the future. We then model structure in the prediction history via a graph attention network (GAT) and learn structure in the future via a multi-task scheme. Experimental results show that our approach significantly outperforms a strong baseline, achieving Smatch scores of 85.5 ±0.1 and 84.2 ±0.1 on AMR 2.0 and AMR 3.0, respectively.
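
A minimal sketch of the "structure in prediction history" component: attention restricted to AMR-token-graph neighbours that have already been generated. The paper's actual GAT layer and the multi-task objective for future structure are more involved; the shapes and toy graph below are illustrative only.

```python
# Sketch of graph attention over the prediction history: at decoding step t,
# each AMR token may attend only to tokens it is connected to in the AMR
# token graph AND that were already generated (a causal restriction).
# Dimensions and the toy chain graph are illustrative.
import torch
import torch.nn.functional as F

def history_graph_attention(h, adj):
    """h: (T, d) decoder states; adj: (T, T) 0/1 AMR-token-graph edges."""
    T = h.size(0)
    causal = torch.tril(torch.ones(T, T))          # only previous positions
    mask = (adj * causal) > 0
    scores = h @ h.t() / h.size(1) ** 0.5          # scaled dot-product scores
    scores = scores.masked_fill(~mask, float("-inf"))
    # guard rows with no visible neighbours (avoid NaN from all -inf)
    scores = scores.masked_fill(~mask.any(dim=1, keepdim=True), 0.0)
    attn = F.softmax(scores, dim=-1)
    return attn @ h                                 # structure-aware states

h = torch.randn(5, 16)
adj = torch.eye(5) + torch.diag(torch.ones(4), diagonal=-1)  # toy chain graph
out = history_graph_attention(h, adj)
print(out.shape)  # torch.Size([5, 16])
```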

Structure-aware Generation Model for Cross-Domain Aspect-based Sentiment Classification
Shichen Li | Zhongqing Wang | Yanzhi Xu | Guodong Zhou
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Employing pre-trained generation models for cross-domain aspect-based sentiment classification has recently led to large improvements. However, such models ignore the importance of syntactic structures, which have shown appealing effectiveness in classification-based models. Unlike in previous studies, efficiently encoding syntactic structure in a generation model is challenging, because such models are pre-trained on natural language and modeling structured data may lead to catastrophic forgetting of distributional knowledge. In this study, we propose a novel structure-aware generation model to tackle this challenge. In particular, a prompt-driven strategy is designed to bridge the gap between different domains by capturing implicit syntactic information from the input and output sides. Furthermore, the syntactic structure is explicitly encoded into the structure-aware generation model, which can effectively learn domain-irrelevant features based on syntactic pivot features. Empirical results demonstrate the effectiveness of the proposed structure-aware generation model over several strong baselines. The results also indicate that the proposed model is capable of incorporating the input syntactic structure into the generation model.

TopicDiff: A Topic-enriched Diffusion Approach for Multimodal Conversational Emotion Detection
Jiamin Luo | Jingjing Wang | Guodong Zhou
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Multimodal Conversational Emotion (MCE) detection, generally spanning the acoustic, vision and language modalities, has attracted increasing interest in the multimedia community. Previous studies predominantly focus on learning contextual information in conversations, with only a few considering topic information, and then only in the single language modality, always neglecting acoustic and vision topic information. On this basis, we propose a model-agnostic Topic-enriched Diffusion (TopicDiff) approach for capturing multimodal topic information in MCE tasks. In particular, we integrate the diffusion model into a neural topic model to alleviate the diversity deficiency of neural topic models in capturing topic information. Detailed evaluations demonstrate significant improvements of TopicDiff over state-of-the-art MCE baselines, justifying the importance of multimodal topic information to MCE and the effectiveness of TopicDiff in capturing such information. Furthermore, we observe the interesting finding that topic information in the acoustic and vision modalities is more discriminative and robust than that in language.

2023

Opinion Tree Parsing for Aspect-based Sentiment Analysis
Xiaoyi Bao | Xiaotong Jiang | Zhongqing Wang | Yue Zhang | Guodong Zhou
Findings of the Association for Computational Linguistics: ACL 2023

Extracting sentiment elements using pre-trained generative models has recently led to large improvements on aspect-based sentiment analysis benchmarks. These models avoid explicit modeling of the structure between sentiment elements, which makes them succinct but deprives them of desirable properties such as guarantees of structural well-formedness or built-in element alignment. In this study, we propose an opinion tree parsing model, aiming to parse all the sentiment elements into an opinion tree, which explicitly reveals a more comprehensive and complete aspect-level sentiment structure. In particular, we first introduce a novel context-free opinion grammar to normalize the sentiment structure. We then employ a neural chart-based opinion tree parser to fully explore the correlations among sentiment elements and parse them into the opinion tree form. Extensive experiments show the superiority of our proposed model and the capacity of the opinion tree parser with the proposed context-free opinion grammar. More importantly, our model is much faster than previous models.
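
For intuition about chart-based parsing over a context-free opinion grammar, here is a tiny CKY recognizer with an invented toy grammar; the paper's actual grammar and neural chart scorer are not reproduced.

```python
# Tiny CKY chart parser over a toy context-free "opinion grammar".
# The grammar, lexicon, and symbol names are invented for illustration;
# this only shows the chart-based parsing mechanism itself.
from itertools import product

grammar = {                       # binary rules: (B, C) -> A
    ("ASPECT", "OPINION"): "PAIR",
    ("PAIR", "POLARITY"): "QUAD",
}
lexicon = {"battery": "ASPECT", "great": "OPINION", "positive": "POLARITY"}

def cky(tokens):
    n = len(tokens)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, tok in enumerate(tokens):
        chart[i][i + 1].add(lexicon[tok])
    for span in range(2, n + 1):                  # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):             # try every split point
                for b, c in product(chart[i][k], chart[k][j]):
                    if (b, c) in grammar:
                        chart[i][j].add(grammar[(b, c)])
    return chart[0][n]

print(cky(["battery", "great", "positive"]))      # {'QUAD'}
```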

Exploring Graph Pre-training for Aspect-based Sentiment Analysis
Xiaoyi Bao | Zhongqing Wang | Guodong Zhou
Findings of the Association for Computational Linguistics: EMNLP 2023

Existing studies tend to extract sentiment elements in a generative manner in order to avoid complex modeling. Despite their effectiveness, they ignore the importance of the relationships between sentiment elements, which can be crucial, making large pre-trained generative models sub-optimal for modeling sentiment knowledge. Therefore, we introduce two pre-training paradigms to improve the generation model, exploring graph pre-training that targets strengthening the model's ability to capture the relationships between elements. Specifically, we first employ an Element-level Graph Pre-training paradigm, designed to improve the structure awareness of the generative model. Then, we design a Task Decomposition Pre-training paradigm to make the generative model generalizable and robust against various irregular sentiment quadruples. Extensive experiments show the superiority of our proposed method and validate the correctness of our motivation.

Low-Resource Comparative Opinion Quintuple Extraction by Data Augmentation with Prompting
Qingting Xu | Yu Hong | Fubang Zhao | Kaisong Song | Yangyang Kang | Jiaxiang Chen | Guodong Zhou
Findings of the Association for Computational Linguistics: EMNLP 2023

Comparative Opinion Quintuple Extraction (COQE) aims to predict comparative opinion quintuples from comparative sentences. These quintuples include subject, object, shareable aspect, comparative opinion, and preference. The existing pipeline-based COQE method suffers from error propagation. In addition, the complexity of the task and the insufficient amount of annotated data hinder the performance of COQE models. In this paper, we introduce a novel approach: low-resource comparative opinion quintuple extraction by Data Augmentation with Prompting (DAP). First, we present an end-to-end model architecture that is better suited to augmenting data from triplets to quintuples and that effectively avoids error propagation. Additionally, we introduce a data-centric augmentation approach that leverages the robust generative abilities of ChatGPT and integrates transfer learning techniques. Experimental results over three datasets (Camera, Car, Ele) demonstrate that our approach yields substantial improvements and achieves state-of-the-art results. The source code and data are publicly released at: https://github.com/qtxu-nlp/COQE-DAP.

Enhancing Reasoning Capabilities by Instruction Learning and Chain-of-Thoughts for Implicit Discourse Relation Recognition
Yuxiang Lu | Yu Hong | Zhipang Wang | Guodong Zhou
Findings of the Association for Computational Linguistics: EMNLP 2023

Implicit discourse relation recognition aims to comprehend the sense of the connection between two arguments. In this work, we present a classification method based solely on generative models. Our approach employs a combination of instruction templates and in-context learning to refine the generative model for effectively addressing the implicit discourse relation recognition task. Furthermore, we utilize Chain-of-Thought prompting to partition the inference process into a sequence of three successive stages. This strategy enables us to fully exploit the autoregressive generative model's potential for knowledge acquisition and inference, ultimately leading to enhanced performance on this natural language understanding task. Experimental results on the benchmark datasets PDTB 2.0, PDTB 3.0, and the CoNLL-2016 shared task demonstrate superior performance compared to previous state-of-the-art models.
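
Schematically, the three-stage inference can be sketched as chained prompts, where each stage conditions on the previous stage's output. The stage prompts below are invented examples and `generate` is a placeholder for any autoregressive LM call; the paper's actual instruction templates and stage definitions differ.

```python
# Schematic three-stage Chain-of-Thought pipeline for implicit discourse
# relation recognition. `generate` is a placeholder for any autoregressive
# LM call, and the stage prompts are invented examples.

def classify_relation(arg1: str, arg2: str, generate):
    # Stage 1: describe each argument.
    summary = generate(f"Instruction: summarize the key event or claim.\n"
                       f"Arg1: {arg1}\nArg2: {arg2}\nSummary:")
    # Stage 2: reason about how the two arguments connect.
    rationale = generate(f"Arguments:\n{arg1}\n{arg2}\n"
                         f"Given this summary: {summary}\n"
                         f"Explain how the two arguments are connected:")
    # Stage 3: commit to a relation sense given the rationale.
    sense = generate(f"Reasoning: {rationale}\n"
                     f"Choose one relation: Comparison, Contingency, "
                     f"Expansion, or Temporal.\nAnswer:")
    return sense.strip()

# Dummy run with a canned model response:
print(classify_relation("It was raining.", "The game was cancelled.",
                        generate=lambda p: "Contingency"))
```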

A Unified Framework for Synaesthesia Analysis
Kun Sheng | Zhongqing Wang | Qingqing Zhao | Xiaotong Jiang | Guodong Zhou
Findings of the Association for Computational Linguistics: EMNLP 2023

Synaesthesia refers to the description of perceptions in one sensory modality through concepts from other modalities. It is not only a linguistic phenomenon but also a cognitive phenomenon that structures human thought and action, which makes understanding it challenging. As a means of cognition, synaesthesia is rendered by more than sensory modalities; cues and stimuli can also play an important role in expressing and understanding it. In addition, understanding synaesthesia involves many cognitive efforts, such as identifying the semantic relationship between sensory words and modalities. Therefore, we propose a unified framework for annotating all kinds of synaesthetic elements and fully exploring the relationships among them. In particular, we introduce a new annotation scheme covering sensory modalities as well as their cues and stimuli, which facilitates understanding synaesthetic information collectively. We further design a structure generation model to capture the relations among synaesthetic elements and generate them jointly. Extensive experiments verify the importance of the proposed dataset through its statistics and progressive performance, and our proposed model yields state-of-the-art results, demonstrating its effectiveness.

2022

Semantic Simplification for Sentiment Classification
Xiaotong Jiang | Zhongqing Wang | Guodong Zhou
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Recent work on document-level sentiment classification has shown that the sentiment in the original text is often hard to capture, since the sentiment is usually either expressed implicitly or shifted due to the occurrences of negation and rhetorical words. To this end, we enhance the original text with a sentiment-driven simplified clause to intensify its sentiment. The simplified clause shares the same opinion with the original text but expresses the opinion much more simply. Meanwhile, we employ Abstract Meaning Representation (AMR) for generating simplified clauses, since AMR explicitly provides core semantic knowledge, and potentially offers core concepts and explicit structures of original texts. Empirical studies show the effectiveness of our proposed model over several strong baselines. The results also indicate the importance of simplified clauses for sentiment classification.

Cross-Domain Sentiment Classification using Semantic Representation
Shichen Li | Zhongqing Wang | Xiaotong Jiang | Guodong Zhou
Findings of the Association for Computational Linguistics: EMNLP 2022

Previous studies on cross-domain sentiment classification depend on the pivot features or utilize the target data for representation learning, which ignore the semantic relevance between different domains. To this end, we exploit Abstract Meaning Representation (AMR) to help with cross-domain sentiment classification. Compared with the textual input, AMR reduces data sparsity and explicitly provides core semantic knowledge and correlations between different domains. In particular, we develop an algorithm to construct a sentiment-driven semantic graph from sentence-level AMRs. We further design two strategies to linearize the semantic graph and propose a text-graph interaction model to fuse the text and semantic graph representations for cross-domain sentiment classification. Empirical studies show the effectiveness of our proposed model over several strong baselines. The results also indicate the importance of the proposed sentiment-driven semantic graph for cross-domain sentiment classification.
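
As a generic illustration of linearizing a semantic graph into a token sequence (the paper designs two specific strategies of its own, which are not reproduced here), a depth-first traversal with bracketed relations might look like this:

```python
# Sketch of one way to linearize an AMR-derived semantic graph into a token
# sequence for a text-graph interaction model. This depth-first traversal
# with bracketed relations is a generic illustration, not the paper's
# specific linearization strategies.

def linearize(graph, node, visited=None):
    """graph: {node: [(relation, child), ...]}; returns a token list."""
    visited = set() if visited is None else visited
    visited.add(node)
    tokens = [node]
    for rel, child in graph.get(node, []):
        tokens.append(f":{rel}")
        if child in visited:
            tokens.append(child)          # re-entrant node: emit as reference
        else:
            tokens += ["("] + linearize(graph, child, visited) + [")"]
    return tokens

amr_like = {
    "recommend": [("ARG0", "i"), ("ARG1", "laptop")],
    "laptop": [("mod", "great")],
}
print(" ".join(linearize(amr_like, "recommend")))
# recommend :ARG0 ( i ) :ARG1 ( laptop :mod ( great ) )
```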

Discourse Parsing Enhanced by Discourse Dependence Perception
Yuqing Xing | Longyin Zhang | Fang Kong | Guodong Zhou
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

In recent years, top-down neural models have achieved significant success in text-level discourse parsing. Nevertheless, they still suffer from top-down error propagation, especially when performance on the upper-level tree nodes is poor. In this research, we learn directly from the correlations between EDUs to shorten the hierarchical distance of the RST structure and alleviate the above problem. Specifically, we contribute a joint top-down framework that learns from both discourse dependency and constituency parsing through one shared encoder and two independent decoders. Moreover, we explore a constituency-to-dependency conversion scheme tailored for the Chinese discourse corpus to ensure the high quality of the joint learning process. Our experimental results on CDTB show that the dependency information substantially heightens understanding of the rhetorical structure, especially for the upper-level tree layers.

Document-level Event Factuality Identification via Machine Reading Comprehension Frameworks with Transfer Learning
Zhong Qian | Heng Zhang | Peifeng Li | Qiaoming Zhu | Guodong Zhou
Proceedings of the 29th International Conference on Computational Linguistics

Document-level Event Factuality Identification (DEFI) predicts the factuality of a specific event based on a document from which the event can be derived, a fundamental and crucial task in Natural Language Processing (NLP). However, most previous studies only considered the sentence-level task and did not exploit document-level knowledge. Moreover, they modelled DEFI as a typical text classification task that depends heavily on annotated information and is limited to the task-specific corpus, which results in data scarcity. To tackle these issues, we propose a new framework formulating DEFI as Machine Reading Comprehension (MRC) tasks in both Span-Extraction (Ext) and Multiple-Choice (Mch) forms. Our model does not employ any other explicit annotated information, and utilizes Transfer Learning (TL) to extract knowledge from universal large-scale MRC corpora for cross-domain data augmentation. Empirical results on the DLEFM corpus demonstrate that the proposed model outperforms several state-of-the-art baselines.

Unregulated Chinese-to-English Data Expansion Does NOT Work for Neural Event Detection
Zhongqiu Li | Yu Hong | Jie Wang | Shiming He | Jianmin Yao | Guodong Zhou
Proceedings of the 29th International Conference on Computational Linguistics

We leverage cross-language data expansion and retraining to enhance neural Event Detection (ED) on the English ACE corpus. Machine translation is utilized to expand the English ED training set from the Chinese one. However, experimental results illustrate that this strategy actually results in performance degradation. A survey of the translations suggests that mistakenly-aligned triggers in the expanded data negatively influence the retraining process; we refer to this phenomenon as "trigger falsification". To overcome the issue, we apply heuristic rules to regulate the expanded data, fixing the distracting samples that contain falsified triggers. Supplementary experiments show that the rule-based regulation is beneficial, yielding an improvement of about 1.6% F1-score for ED. We additionally show that, instead of transfer learning from the translated ED data, straightforward data combination by random pouring surprisingly performs better.

Pre-trained Token-replaced Detection Model as Few-shot Learner
Zicheng Li | Shoushan Li | Guodong Zhou
Proceedings of the 29th International Conference on Computational Linguistics

Pre-trained masked language models have demonstrated remarkable ability as few-shot learners. In this paper, as an alternative, we propose a novel approach to few-shot learning with pre-trained token-replaced detection models like ELECTRA. In this approach, we reformulate a classification or a regression task as a token-replaced detection problem. Specifically, we first define a template and label description words for each task and put them into the input to form a natural language prompt. Then, we employ the pre-trained token-replaced detection model to predict which label description word is the most original (i.e., least replaced) among all label description words in the prompt. A systematic evaluation on 16 datasets demonstrates that our approach outperforms few-shot learners with pre-trained masked language models in both one-sentence and two-sentence learning tasks.
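
The core scoring step can be sketched with the HuggingFace ELECTRA discriminator, whose per-token logits indicate how likely each token was replaced. The prompt template and label words below are invented examples, not the paper's templates.

```python
# Sketch of using a token-replaced detection model (ELECTRA) as a few-shot
# classifier: the label word whose token the discriminator scores as LEAST
# replaced is predicted. Template and label words are invented examples.
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

name = "google/electra-small-discriminator"
tokenizer = ElectraTokenizerFast.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name).eval()

def classify(sentence: str, label_words=("great", "terrible")) -> str:
    scores = {}
    for word in label_words:
        prompt = f"{sentence} It was {word}."
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0]    # higher = "was replaced"
        # position of the label word's (first) subtoken in the prompt
        word_id = tokenizer(word, add_special_tokens=False)["input_ids"][0]
        pos = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
        scores[word] = logits[pos].item()
    return min(scores, key=scores.get)            # least-replaced label wins

print(classify("The movie was full of surprises and fun."))
```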

Taking Actions Separately: A Bidirectionally-Adaptive Transfer Learning Method for Low-Resource Neural Machine Translation
Xiaolin Xing | Yu Hong | Minhan Xu | Jianmin Yao | Guodong Zhou
Proceedings of the 29th International Conference on Computational Linguistics

Training Neural Machine Translation (NMT) models suffers from sparse parallel data in low-resource translation scenarios. Existing solutions primarily concentrate on Parent-Child (PC) transfer learning, which transfers NMT models well-trained on high-resource languages (the Parent NMT) to low-resource languages, producing Child NMT models by fine-tuning. It has been carefully demonstrated that a variety of PC variants yield significant improvements for low-resource NMT. In this paper, we enhance PC-based NMT with a bidirectionally-adaptive learning strategy. Specifically, we divide the inner constituents (six Transformer layers) of the Parent encoder into two "teams", T1 and T2. During representation learning, T1 learns to encode low-resource languages conditioned on a bilingual shareable latent space; a generative adversarial network and masked language modeling are used for space-shareable encoding. T2, on the other hand, is transferred straightforwardly to low-resource languages and fine-tuned together with T1 for low-resource translation. In brief, T1 and T2 act separately towards different goals: the former adapts to the characteristics of low-resource languages during encoding, while the latter adapts the translation experience learned from high-resource languages. We experiment on the benchmark SETIMES corpora, conducting low-resource NMT for Albanian (Sq), Macedonian (Mk), Croatian (Hr) and Romanian (Ro). Experimental results show that our method yields substantial improvements, allowing NMT performance to reach BLEU4 scores of 62.24%, 56.93%, 50.53% and 54.65% for Sq, Mk, Hr and Ro, respectively.

One-Teacher and Multiple-Student Knowledge Distillation on Sentiment Classification
Xiaoqin Chang | Sophia Yat Mei Lee | Suyang Zhu | Shoushan Li | Guodong Zhou
Proceedings of the 29th International Conference on Computational Linguistics

Knowledge distillation is an effective method to transfer knowledge from a large pre-trained teacher model to a compact student model. However, in previous studies, the distilled student models are still large and remain impractical for highly speed-sensitive systems (e.g., an IR system). In this study, we aim to distill a deep pre-trained model into an extremely compact shallow model such as a CNN. Specifically, we propose a novel one-teacher and multiple-student knowledge distillation approach to distill a deep pre-trained teacher model into multiple shallow student models with ensemble learning. Moreover, we leverage large-scale unlabeled data to improve the performance of the students. Empirical studies on three sentiment classification tasks demonstrate that our approach achieves better results with far fewer parameters (0.9%-18%) and extremely high speedup ratios (100X-1000X).
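
A minimal sketch of the distillation objective under the usual temperature-softened formulation: each shallow student matches the teacher's soft distribution, and predictions are ensembled by averaging. The paper's student architectures and its use of unlabeled data are omitted; shapes are illustrative.

```python
# One-teacher, multi-student distillation sketch: every student minimizes
# KL divergence to the teacher's temperature-softened distribution, and
# predictions are ensembled by averaging class probabilities.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def ensemble_predict(students, x):
    """Average the class probabilities of all shallow students."""
    probs = torch.stack([F.softmax(s(x), dim=-1) for s in students])
    return probs.mean(dim=0).argmax(dim=-1)

# Toy usage with linear "students" and random teacher logits:
students = [torch.nn.Linear(8, 2) for _ in range(3)]
x = torch.randn(4, 8)
teacher_logits = torch.randn(4, 2)
loss = sum(distill_loss(s(x), teacher_logits) for s in students)
loss.backward()
print(ensemble_predict(students, x))
```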

2021

XLPT-AMR: Cross-Lingual Pre-Training via Multi-Task Learning for Zero-Shot AMR Parsing and Text Generation
Dongqin Xu | Junhui Li | Muhua Zhu | Min Zhang | Guodong Zhou
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Due to the scarcity of annotated data, Abstract Meaning Representation (AMR) research is relatively limited and challenging for languages other than English. Given the availability of an English AMR dataset and English-to-X parallel datasets, in this paper we propose a novel cross-lingual pre-training approach via multi-task learning (MTL) for both zero-shot AMR parsing and AMR-to-text generation. Specifically, we consider three types of relevant tasks: AMR parsing, AMR-to-text generation, and machine translation. We expect that knowledge gained while learning English AMR parsing and text generation can be transferred to their counterparts in other languages. With properly pre-trained models, we explore four different fine-tuning methods, i.e., vanilla fine-tuning with a single task, one-for-all MTL fine-tuning, targeted MTL fine-tuning, and teacher-student-based MTL fine-tuning. Experimental results on AMR parsing and text generation for multiple non-English languages demonstrate that our approach significantly outperforms a strong pre-training baseline and greatly advances the state of the art. In detail, on LDC2020T07 we achieve 70.45%, 71.76%, and 70.80% Smatch F1 for AMR parsing of German, Spanish, and Italian, respectively, while for AMR-to-text generation of these languages we obtain BLEU scores of 25.69, 31.36, and 28.42, respectively. We make our code available on GitHub at https://github.com/xdqkid/XLPT-AMR.

Adversarial Learning for Discourse Rhetorical Structure Parsing
Longyin Zhang | Fang Kong | Guodong Zhou
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Text-level discourse rhetorical structure (DRS) parsing is known to be challenging due to the notorious lack of training data. Although recent top-down DRS parsers can better leverage global document context and have achieved certain success, the performance is still far from perfect. To our knowledge, all previous DRS parsers make local decisions for either bottom-up node composition or top-down split point ranking at each time step, and largely ignore DRS parsing from the global view point. Obviously, it is not sufficient to build an entire DRS tree only through these local decisions. In this work, we present our insight on evaluating the pros and cons of the entire DRS tree for global optimization. Specifically, based on recent well-performing top-down frameworks, we introduce a novel method to transform both gold standard and predicted constituency trees into tree diagrams with two color channels. After that, we learn an adversarial bot between gold and fake tree diagrams to estimate the generated DRS trees from a global perspective. We perform experiments on both RST-DT and CDTB corpora and use the original Parseval for performance evaluation. The experimental results show that our parser can substantially improve the performance when compared with previous state-of-the-art parsers.

More than Text: Multi-modal Chinese Word Segmentation
Dong Zhang | Zheng Hu | Shoushan Li | Hanqian Wu | Qiaoming Zhu | Guodong Zhou
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Chinese word segmentation (CWS) is undoubtedly an important basic task in natural language processing. Previous works only focus on the textual modality, but there are often audio and video utterances (such as news broadcasts and face-to-face dialogues) where the textual, acoustic and visual modalities normally co-exist. To this end, we attempt to combine multi-modality (mainly the converted text and the actual voice information) to perform CWS. In this paper, we annotate a new dataset for CWS containing text and audio. Moreover, we propose a time-dependent multi-modal interactive model based on the Transformer framework to integrate multi-modal information for word sequence labeling. Experimental results on three different training sets show the effectiveness of our approach in fusing text and audio.

Feature-level Incongruence Reduction for Multimodal Translation
Zhifeng Li | Yu Hong | Yuchen Pan | Jian Tang | Jianmin Yao | Guodong Zhou
Proceedings of the Second Workshop on Advances in Language and Vision Research

Caption translation aims to translate image annotations (captions for short). Recently, Multimodal Neural Machine Translation (MNMT) has been explored as the essential solution. Besides linguistic features in captions, MNMT allows visual (image) features to be used. The integration of multimodal features reinforces the semantic representation and considerably improves translation performance. However, MNMT suffers from the incongruence between visual and linguistic features. To overcome this problem, we propose to extend the MNMT architecture with a harmonization network, which harmonizes the multimodal features (linguistic and visual) by unidirectional modal space conversion. It enables multimodal translation to be carried out in a seemingly monomodal translation pipeline. We experiment on the golden Multi30k-16 and Multi30k-17 datasets. Experimental results show that, compared to the baseline, the proposed method yields improvements of up to 2.2% BLEU for translating English captions into German (En→De), 7.6% for English-to-French translation (En→Fr) and 1.5% for English-to-Czech (En→Cz). The harmonization network achieves performance competitive with the state of the art.

CVAE-based Re-anchoring for Implicit Discourse Relation Classification
Zujun Dou | Yu Hong | Yu Sun | Guodong Zhou
Findings of the Association for Computational Linguistics: EMNLP 2021

Training implicit discourse relation classifiers suffers from data sparsity. The Variational AutoEncoder (VAE) appears to be a proper solution, because ideally a VAE can generate inexhaustibly varied samples, which facilitates selective data augmentation. However, our experiments show that coupling a VAE with a RoBERTa-based classifier results in severe performance degradation. We ascribe this unusual phenomenon to the erroneous sampling that happens when the VAE pursues variation. To overcome the problem, we develop a re-anchoring strategy in which a Conditional VAE (CVAE) is used to estimate the risk of erroneous sampling while migrating the anchor to reduce that risk. Test results on PDTB v2.0 illustrate that, compared to the RoBERTa-based baseline, re-anchoring yields substantial improvements. Besides, we observe that re-anchoring can cooperate with other auxiliary strategies (transfer learning and an interactive attention mechanism) to further improve the baseline, obtaining F-scores of about 55%, 63%, 80% and 44% for the four main relation types (Comparison, Contingency, Expansion, Temporality) in the binary (Yes/No) classification scenario.

EDTC: A Corpus for Discourse-Level Topic Chain Parsing
Longyin Zhang | Xin Tan | Fang Kong | Guodong Zhou
Findings of the Association for Computational Linguistics: EMNLP 2021

Discourse analysis has long been known to be fundamental in natural language processing. In this research, we present our insight on discourse-level topic chain (DTC) parsing which aims at discovering new topics and investigating how these topics evolve over time within an article. To address the lack of data, we contribute a new discourse corpus with DTC-style dependency graphs annotated upon news articles. In particular, we ensure the high reliability of the corpus by utilizing a two-step annotation strategy to build the data and filtering out the annotations with low confidence scores. Based on the annotated corpus, we introduce a simple yet robust system for automatic discourse-level topic chain parsing.

Coupling Context Modeling with Zero Pronoun Recovering for Document-Level Natural Language Generation
Xin Tan | Longyin Zhang | Guodong Zhou
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Natural language generation (NLG) tasks on pro-drop languages are known to suffer from zero pronoun (ZP) problems, which remain challenging due to the scarcity of ZP-annotated NLG corpora. To this end, we propose a highly adaptive two-stage approach that couples context modeling with ZP recovering to mitigate the ZP problem in NLG tasks. Notably, we frame the recovery process in a task-supervised fashion, where the ZP representation recovering capability is learned during the NLG task learning process, so our method does not require NLG corpora annotated with ZPs. For system enhancement, we learn an adversarial bot to adjust our model outputs to alleviate the error propagation caused by mis-recovered ZPs. Experiments on three document-level NLG tasks, i.e., machine translation, question answering, and summarization, show that our approach improves performance to a great extent, and the improvement on pronoun translation is particularly impressive.

Joint Multi-modal Aspect-Sentiment Analysis with Auxiliary Cross-modal Relation Detection
Xincheng Ju | Dong Zhang | Rong Xiao | Junhui Li | Shoushan Li | Min Zhang | Guodong Zhou
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Aspect term extraction (ATE) and aspect sentiment classification (ASC) are two fundamental, fine-grained sub-tasks in aspect-level sentiment analysis (ALSA). In textual analysis, jointly extracting aspect terms and sentiment polarities has drawn much attention due to its better applicability than the individual sub-tasks. In the multi-modal scenario, however, existing studies handle each sub-task independently, which fails to model the innate connection between the two objectives and forgoes the better applications. Therefore, in this paper, we are the first to jointly perform multi-modal ATE (MATE) and multi-modal ASC (MASC), and we propose a multi-modal joint learning approach with auxiliary cross-modal relation detection for multi-modal aspect-level sentiment analysis (MALSA). Specifically, we first build an auxiliary text-image relation detection module to control the proper exploitation of visual information. Second, we adopt a hierarchical framework to bridge the multi-modal connection between MATE and MASC, with separate visual guidance for each sub-module. Finally, we obtain all aspect-level sentiment polarities dependent on the jointly extracted specific aspects. Extensive experiments show the effectiveness of our approach against joint textual approaches, and pipeline and collapsed multi-modal approaches.

2020

Multimodal Topic-Enriched Auxiliary Learning for Depression Detection
Minghui An | Jingjing Wang | Shoushan Li | Guodong Zhou
Proceedings of the 28th International Conference on Computational Linguistics

From the perspective of health psychology, human beings with long-term, sustained negativity are highly likely to be diagnosed with depression. Inspired by this, we argue that the global topic information derived from user-generated content (e.g., texts and images) is crucial for boosting performance on the depression detection task, though this information has been neglected by almost all previous studies on depression detection. To this end, we propose a new Multimodal Topic-enriched Auxiliary Learning (MTAL) approach, aiming at capturing the topic information inside different modalities (i.e., texts and images) for depression detection. In particular, we propose a modality-agnostic topic model capable of mining topical clues from either discrete textual signals or continuous visual signals. On this basis, topic modeling w.r.t. the two modalities is cast as two auxiliary tasks for improving the performance of the primary task (i.e., depression detection). Finally, a detailed evaluation demonstrates the great advantage of our MTAL approach to depression detection over the state-of-the-art baselines. This justifies the importance of multimodal topic information to depression detection and the effectiveness of our approach in capturing such information.

Sentiment Forecasting in Dialog
Zhongqing Wang | Xiujun Zhu | Yue Zhang | Shoushan Li | Guodong Zhou
Proceedings of the 28th International Conference on Computational Linguistics

Sentiment forecasting in dialog aims to predict the polarity of the next utterance to come, and can help speakers revise their utterances when generating sentimental utterances. However, the polarity of the next utterance is normally hard to predict, due to the lack of its content (which is yet to come). In this study, we propose a Neural Sentiment Forecasting (NSF) model to address these inherent challenges. In particular, we employ a neural simulation model to simulate the next utterance based on the context (the previous utterances encountered). Moreover, we employ a sequence influence model to learn both pair-wise and sequence-wise influence. Empirical studies illustrate the importance of the proposed sentiment forecasting task and justify the effectiveness of our NSF model over several strong baselines.

NUT-RC: Noisy User-generated Text-oriented Reading Comprehension
Rongtao Huang | Bowei Zou | Yu Hong | Wei Zhang | AiTi Aw | Guodong Zhou
Proceedings of the 28th International Conference on Computational Linguistics

Reading comprehension (RC) on social media such as Twitter is a critical and challenging task due to its noisy, informal, but informative nature. Most existing RC models are developed on formal datasets such as news articles and Wikipedia documents, which severely limits their performance when directly applied to the noisy and informal texts of social media. Moreover, these models only focus on a certain type of RC, extractive or generative, and ignore integrating the two. To address these challenges, we propose a noisy user-generated text-oriented RC model. In particular, we first introduce a set of text normalizers to transform the noisy and informal texts into formal ones. Then, we integrate the extractive and generative RC models via a multi-task learning mechanism and an answer selection module. Experimental results on TweetQA demonstrate that our NUT-RC model significantly outperforms the state-of-the-art social media-oriented RC models.
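
The paper's exact normalizer set is not reproduced here; as an illustration of the noisy-to-formal mapping such normalizers perform, a few regex rules:

```python
# Illustrative text normalizers for noisy user-generated text. These regex
# rules are examples of the kind of normalization involved, not the paper's
# actual normalizer set.
import re

def normalize_tweet(text: str) -> str:
    text = re.sub(r"https?://\S+", "<URL>", text)        # links
    text = re.sub(r"@\w+", "<USER>", text)               # user mentions
    text = re.sub(r"#(\w+)", r"\1", text)                # keep hashtag word
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)           # cooool -> cool
    text = re.sub(r"\s+", " ", text).strip()             # collapse whitespace
    return text

print(normalize_tweet("@nasa sooooo cooool!!! see https://t.co/xyz #space"))
# <USER> soo cool!! see <URL> space
```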

Interactively-Propagative Attention Learning for Implicit Discourse Relation Recognition
Huibin Ruan | Yu Hong | Yang Xu | Zhen Huang | Guodong Zhou | Min Zhang
Proceedings of the 28th International Conference on Computational Linguistics

We tackle implicit discourse relation recognition. Both self-attention and interactive-attention mechanisms have been applied for attention-aware representation learning, improving current discourse analysis models. To take advantage of the two attention mechanisms simultaneously, we develop a propagative attention learning model using a cross-coupled two-channel network. We experiment on the Penn Discourse Treebank. The test results demonstrate that our model yields substantial improvements over the baselines (BiLSTM and BERT).

Aspect Sentiment Classification with Document-level Sentiment Preference Modeling
Xiao Chen | Changlong Sun | Jingjing Wang | Shoushan Li | Luo Si | Min Zhang | Guodong Zhou
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

In the literature, existing studies always consider Aspect Sentiment Classification (ASC) as an independent sentence-level classification problem, handled aspect by aspect, which largely ignores document-level sentiment preference information, though such information is obviously crucial for alleviating the information deficiency problem in ASC. In this paper, we explore two kinds of sentiment preference information inside a document: contextual sentiment consistency w.r.t. the same aspect (intra-aspect sentiment consistency) and contextual sentiment tendency w.r.t. all related aspects (inter-aspect sentiment tendency). On this basis, we propose a Cooperative Graph Attention Networks (CoGAN) approach for cooperatively learning the aspect-related sentence representation. Specifically, two graph attention networks are leveraged to model the two kinds of document-level sentiment preference information, followed by an interactive mechanism to integrate the two-fold preference. A detailed evaluation demonstrates the great advantage of the proposed approach to ASC over state-of-the-art baselines. This justifies the importance of document-level sentiment preference information to ASC and the effectiveness of our approach in capturing such information.

A Top-down Neural Architecture towards Text-level Parsing of Discourse Rhetorical Structure
Longyin Zhang | Yuqing Xing | Fang Kong | Peifeng Li | Guodong Zhou
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Due to its great importance in deep natural language understanding and various downstream applications, text-level parsing of discourse rhetorical structure (DRS) has been drawing more and more attention in recent years. However, all previous studies on text-level discourse parsing adopt bottom-up approaches, which limit DRS determination to local information and fail to benefit from global information about the overall discourse. In this paper, we justify from both computational and perceptive points of view that a top-down architecture is more suitable for text-level DRS parsing. On this basis, we propose a top-down neural architecture for text-level DRS parsing. In particular, we cast discourse parsing as a recursive split point ranking task, where a split point is classified to different levels according to its rank and the elementary discourse units (EDUs) associated with it are arranged accordingly. In this way, we can determine the complete DRS as a hierarchical tree structure via an encoder-decoder with an internal stack. Experimentation on both the English RST-DT corpus and the Chinese CDTB corpus shows the great effectiveness of our proposed top-down approach to text-level DRS parsing.
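
The recursive split-point formulation can be sketched as follows, with `score_split` standing in for the learned encoder-decoder ranker; the internal stack and the level classification of split points are omitted.

```python
# Sketch of top-down DRS parsing as recursive split-point ranking: the
# best-scoring split divides a span of EDUs, and both halves are parsed
# recursively. `score_split` is a placeholder for the learned model.

def parse_span(edus, lo, hi, score_split):
    """Return a binary tree over EDUs edus[lo:hi] (end-exclusive indices)."""
    if hi - lo == 1:
        return edus[lo]                            # a single EDU is a leaf
    k = max(range(lo + 1, hi), key=lambda s: score_split(lo, s, hi))
    return (parse_span(edus, lo, k, score_split),
            parse_span(edus, k, hi, score_split))

# Toy scorer: prefer the most balanced split (a real scorer is learned).
balanced = lambda lo, s, hi: -abs((s - lo) - (hi - s))
tree = parse_span(["e1", "e2", "e3", "e4"], 0, 4, balanced)
print(tree)  # (('e1', 'e2'), ('e3', 'e4'))
```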

基于对话约束的回复生成研究(Research on Response Generation via Dialogue Constraints)
Mengyu Guan (管梦雨) | Zhongqing Wang (王中卿) | Shoushan Li (李寿山) | Guodong Zhou (周国栋)
Proceedings of the 19th Chinese National Conference on Computational Linguistics

Existing dialogue systems suffer from generating meaningless safe responses such as "OK" and "I don't know". In daily conversation, speakers usually discuss a specific topic, and every utterance carries a clear emotion and intent. This paper therefore proposes a response generation model based on dialogue constraints: building on a Seq2Seq model, it incorporates recognition of the dialogue's topic, emotion, and intent. The method constrains the topic, emotion, and intent of the generated response, thereby generating responses that carry reasonable emotion and intent and are relevant to the dialogue topic. Experiments show that the proposed method effectively improves the quality of the generated responses.

Improving AMR Parsing with Sequence-to-Sequence Pre-training
Dongqin Xu | Junhui Li | Muhua Zhu | Min Zhang | Guodong Zhou
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

In the literature, research on abstract meaning representation (AMR) parsing is much restricted by the size of the human-curated datasets that are critical to building an AMR parser with good performance. To alleviate this data size restriction, pre-trained models have been drawing more and more attention in AMR parsing. However, previous pre-trained models, like BERT, are implemented for general purposes and may not work as expected for the specific task of AMR parsing. In this paper, we focus on sequence-to-sequence (seq2seq) AMR parsing and propose a seq2seq pre-training approach to build pre-trained models, in both single-task and joint ways, on three relevant tasks: machine translation, syntactic parsing, and AMR parsing itself. Moreover, we extend the vanilla fine-tuning method to a multi-task learning fine-tuning method that optimizes for the performance of AMR parsing while endeavoring to preserve the responses of the pre-trained models. Extensive experimental results on two English benchmark datasets show that both the single and joint pre-trained models significantly improve performance (e.g., from 71.5 to 80.2 on AMR 2.0), reaching the state of the art. The result is very encouraging since we achieve this with seq2seq models rather than complex models. We make our code and model available at https://github.com/xdqkid/S2S-AMR-Parser.
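
As a loose sketch of fine-tuning that optimizes a task loss while trying to preserve pre-trained behaviour, the toy regularizer below anchors parameters to their pre-trained values with an L2 penalty. This substitution is mine, for illustration only; it is not the paper's multi-task fine-tuning objective.

```python
# Toy "preserve the pre-trained model" regularizer: task loss plus an L2
# anchor keeping fine-tuned parameters near their pre-trained values.
# The paper instead preserves pre-trained responses via multi-task
# fine-tuning; this is a simplified stand-in.
import torch

def anchored_loss(task_loss, model, pretrained_params, lam=0.01):
    """task_loss: scalar tensor; pretrained_params: frozen reference copy."""
    anchor = sum(((p - p0) ** 2).sum()
                 for p, p0 in zip(model.parameters(), pretrained_params))
    return task_loss + lam * anchor

model = torch.nn.Linear(4, 4)
reference = [p.detach().clone() for p in model.parameters()]  # "pre-trained"
loss = anchored_loss(torch.tensor(1.0, requires_grad=True), model, reference)
loss.backward()
```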

Multi-modal Multi-label Emotion Detection with Modality and Label Dependence
Dong Zhang | Xincheng Ju | Junhui Li | Shoushan Li | Qiaoming Zhu | Guodong Zhou
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

As an important research issue in the natural language processing community, multi-label emotion detection has been drawing more and more attention in the last few years. However, almost all existing studies focus on one modality (e.g., textual modality). In this paper, we focus on multi-label emotion detection in a multi-modal scenario. In this scenario, we need to consider both the dependence among different labels (label dependence) and the dependence between each predicting label and different modalities (modality dependence). Particularly, we propose a multi-modal sequence-to-set approach to effectively model both kinds of dependence in multi-modal multi-label emotion detection. The detailed evaluation demonstrates the effectiveness of our approach.

2019

Adversarial Attention Modeling for Multi-dimensional Emotion Regression
Suyang Zhu | Shoushan Li | Guodong Zhou
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

In this paper, we propose a neural network-based approach, namely Adversarial Attention Network, to the task of multi-dimensional emotion regression, which automatically rates multiple emotion dimension scores for an input text. Especially, to determine which words are valuable for a particular emotion dimension, an attention layer is trained to weight the words in an input sequence. Furthermore, adversarial training is employed between two attention layers to learn better word weights via a discriminator. In particular, a shared attention layer is incorporated to learn public word weights between two emotion dimensions. Empirical evaluation on the EMOBANK corpus shows that our approach achieves notable improvements in r-values on both EMOBANK Reader’s and Writer’s multi-dimensional emotion regression tasks in all domains over the state-of-the-art baselines.

Topic Tensor Network for Implicit Discourse Relation Recognition in Chinese
Sheng Xu | Peifeng Li | Fang Kong | Qiaoming Zhu | Guodong Zhou
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

In the literature, most of the previous studies on English implicit discourse relation recognition only use sentence-level representations, which cannot provide enough semantic information in Chinese due to its unique paratactic characteristics. In this paper, we propose a topic tensor network to recognize Chinese implicit discourse relations with both sentence-level and topic-level representations. In particular, besides encoding arguments (discourse units) using a gated convolutional network to obtain sentence-level representations, we train a simplified topic model to infer the latent topic-level representations. Moreover, we feed the two pairs of representations to two factored tensor networks, respectively, to capture both the sentence-level interactions and topic-level relevance using multi-slice tensors. Experimentation on CDTB, a Chinese discourse corpus, shows that our proposed model significantly outperforms several state-of-the-art baselines in both micro and macro F1-scores.

Aspect Sentiment Classification Towards Question-Answering with Reinforced Bidirectional Attention Network
Jingjing Wang | Changlong Sun | Shoushan Li | Xiaozhong Liu | Luo Si | Min Zhang | Guodong Zhou
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

In the literature, existing studies on aspect sentiment classification (ASC) focus on individual non-interactive reviews. This paper extends the research to interactive reviews and proposes a new research task, namely Aspect Sentiment Classification towards Question-Answering (ASC-QA), for real-world applications. This new task aims to predict sentiment polarities for specific aspects from interactive QA style reviews. In particular, a high-quality annotated corpus is constructed for ASC-QA to facilitate corresponding research. On this basis, a Reinforced Bidirectional Attention Network (RBAN) approach is proposed to address two inherent challenges in ASC-QA, i.e., semantic matching between question and answer, and data noise. Experimental results demonstrate the great advantage of the proposed approach to ASC-QA against several state-of-the-art baselines.

Document-Level Event Factuality Identification via Adversarial Neural Network
Zhong Qian | Peifeng Li | Qiaoming Zhu | Guodong Zhou
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Document-level event factuality identification is an important subtask of event factuality and is crucial for discourse understanding in Natural Language Processing (NLP). Previous studies mainly suffer from the scarcity of suitable corpora and effective methods. To solve these two issues, we first construct a corpus annotated with both document- and sentence-level event factuality information on both English and Chinese texts. We then present an LSTM neural network based on adversarial training, with both intra- and inter-sequence attention, to identify document-level event factuality. Experimental results show that our neural network model outperforms various baselines on the constructed corpus.

Hierarchical Modeling of Global Context for Document-Level Neural Machine Translation
Xin Tan | Longyin Zhang | Deyi Xiong | Guodong Zhou
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Document-level machine translation (MT) remains challenging due to the difficulty of efficiently using document context for translation. In this paper, we propose a hierarchical model to learn the global context for document-level neural machine translation (NMT), through a sentence encoder capturing intra-sentence dependencies and a document encoder modeling document-level inter-sentence consistency and coherence. With this hierarchical architecture, we feed the extracted global document context back to each word in a top-down fashion, to distinguish different translations of a word according to its specific surrounding context. In addition, since large-scale in-domain document-level parallel corpora are usually unavailable, we use a two-step training strategy, taking advantage of a large-scale corpus with out-of-domain parallel sentence pairs and a small-scale corpus with in-domain parallel document pairs, to achieve domain adaptability. Experimental results on several benchmark corpora show that our proposed model significantly improves document-level translation performance over several strong NMT baselines.

pdf bib
A Discrete CVAE for Response Generation on Short-Text Conversation
Jun Gao | Wei Bi | Xiaojiang Liu | Junhui Li | Guodong Zhou | Shuming Shi
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Neural conversation models such as encoder-decoder models tend to generate bland and generic responses. Some researchers propose to use the conditional variational autoencoder (CVAE), which maximizes the lower bound on the conditional log-likelihood with a continuous latent variable. With different sampled latent variables, the model is expected to generate diverse responses. Although CVAE-based models have shown tremendous potential, their improvement in generating high-quality responses is still unsatisfactory. In this paper, we introduce a discrete latent variable with an explicit semantic meaning to improve the CVAE on short-text conversation. A major advantage of our model is that we can exploit the semantic distance between latent variables to maintain good diversity among the sampled latent variables. Accordingly, we propose a two-stage sampling approach to enable efficient selection of diverse variables from the large latent space assumed in the short-text conversation task. Experimental results indicate that our model outperforms various kinds of generation models under both automatic and human evaluations and generates more diverse and informative responses.
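
A rough sketch of the two-stage sampling idea, assuming the latent space is organised into semantic clusters (the clusters and words below are invented for illustration): sample clusters first, then one latent word per cluster, so that the selected variables stay semantically distant from one another:

```python
import random

# Hypothetical clustered latent space: each cluster groups semantically
# related candidate latent words.
clusters = {
    "food":    ["pizza", "noodles", "sushi"],
    "weather": ["rain", "sunny", "storm"],
    "travel":  ["flight", "hotel", "beach"],
}

def two_stage_sample(k=3):
    """Stage 1: sample k distinct clusters; stage 2: sample one latent word
    per cluster. Drawing at most one word per cluster keeps the selected
    latent variables semantically far apart, encouraging diverse responses."""
    chosen = random.sample(list(clusters), k)
    return [random.choice(clusters[c]) for c in chosen]

random.seed(0)
print(two_stage_sample())  # e.g. ['storm', 'sushi', 'hotel']
```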

pdf bib
Negative Focus Detection via Contextual Attention Mechanism
Longxiang Shen | Bowei Zou | Yu Hong | Guodong Zhou | Qiaoming Zhu | AiTi Aw
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Negation is a universal but complicated linguistic phenomenon that has received considerable attention from the NLP community over the last decade, since a negated statement often carries both an explicit negative focus and implicit positive meanings. To understand a negated statement, it is critical to precisely detect the negative focus in context. However, how to capture contextual information for negative focus detection remains an open challenge. To address this, we propose an attention-based neural network to model contextual information. In particular, we introduce a framework consisting of a Bidirectional Long Short-Term Memory (BiLSTM) neural network and a Conditional Random Fields (CRF) layer to effectively encode the order information and long-range context dependencies in a sentence. Moreover, we design two types of attention mechanisms, word-level contextual attention and topic-level contextual attention, to take advantage of contextual information across sentences from the word perspective and the topic perspective, respectively. Experimental results on the *SEM'12 shared task corpus show that our approach achieves the best performance on negative focus detection, yielding an absolute improvement of 2.11% over the state of the art. This demonstrates the effectiveness of the two types of contextual attention mechanisms.

pdf bib
Modeling Graph Structure in Transformer for Better AMR-to-Text Generation
Jie Zhu | Junhui Li | Muhua Zhu | Longhua Qian | Min Zhang | Guodong Zhou
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Recent studies on AMR-to-text generation often formalize the task as a sequence-to-sequence (seq2seq) learning problem by converting an Abstract Meaning Representation (AMR) graph into a word sequence. Graph structures are further modeled within the seq2seq framework to utilize the structural information in AMR graphs. However, previous approaches only consider the relations between directly connected concepts while ignoring the rich structure in AMR graphs. In this paper, we eliminate this strong limitation and propose a novel structure-aware self-attention approach to better model the relations between indirectly connected concepts in the state-of-the-art seq2seq model, i.e., the Transformer. In particular, several different methods are explored to learn structural representations between two concepts. Experimental results on English AMR benchmark datasets show that our approach significantly outperforms the state of the art with 29.66 and 31.82 BLEU on LDC2015E86 and LDC2017T10, respectively. To the best of our knowledge, these are the best results achieved so far by supervised models on these benchmarks.
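
One way to picture structure-aware self-attention is to bias the attention keys with an embedding of the structural relation between two concepts. The sketch below is a simplified single-head variant with an invented four-relation inventory, not the paper's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureAwareAttention(nn.Module):
    """Self-attention whose key vectors are shifted by an embedding of the
    structural relation between the query and key concepts (a simplification;
    the relation inventory here is made up for illustration)."""

    def __init__(self, dim, n_relations):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def forward(self, x, rel_ids):
        # x: (n, dim) concept states; rel_ids: (n, n) relation id between i, j
        q, k, v = self.q(x), self.k(x), self.v(x)
        k_rel = k.unsqueeze(0) + self.rel(rel_ids)        # (n, n, dim)
        logits = (q.unsqueeze(1) * k_rel).sum(-1) / x.size(-1) ** 0.5
        return F.softmax(logits, dim=-1) @ v

att = StructureAwareAttention(dim=16, n_relations=4)
x = torch.randn(6, 16)                  # 6 AMR concepts
rel = torch.randint(0, 4, (6, 6))       # e.g. self / parent / child / other
print(att(x, rel).shape)                # torch.Size([6, 16])
```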

pdf bib
Emotion Detection with Neural Personal Discrimination
Xiabing Zhou | Zhongqing Wang | Shoushan Li | Guodong Zhou | Min Zhang
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

There has been a recent line of work on automatically predicting the emotions of posts in social media. Existing approaches consider the posts individually and predict their emotions independently. Different from previous research, we explore the dependence among relevant posts via the authors’ backgrounds, since authors with similar backgrounds, e.g., gender and location, tend to express similar emotions. However, such personal attributes are not easy to obtain on most social media websites, and it is hard to capture attribute-aware words that connect similar people. Accordingly, we propose a Neural Personal Discrimination (NPD) approach to address the above challenges by determining personal attributes from posts and connecting relevant posts with similar attributes to jointly learn their emotions. In particular, we employ adversarial discriminators to determine the personal attributes, with attention mechanisms to aggregate attribute-aware words. In this way, the social correlation among different posts can be better modeled. Experimental results show the usefulness of personal attributes and the effectiveness of our proposed NPD approach in capturing such personal attributes, with significant gains over the state-of-the-art models.

pdf bib
Human-Like Decision Making: Document-level Aspect Sentiment Classification via Hierarchical Reinforcement Learning
Jingjing Wang | Changlong Sun | Shoushan Li | Jiancheng Wang | Luo Si | Min Zhang | Xiaozhong Liu | Guodong Zhou
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Recently, neural networks have shown promising results on Document-level Aspect Sentiment Classification (DASC). However, these approaches often offer little transparency w.r.t. their inner working mechanisms and lack interpretability. In this paper, to simulate the steps a human takes in analyzing the aspect sentiment of a document, we propose a new Hierarchical Reinforcement Learning (HRL) approach to DASC. This approach incorporates clause selection and word selection strategies to tackle the data noise problem in DASC. First, a high-level policy is proposed to select aspect-relevant clauses and discard noisy clauses. Then, a low-level policy is proposed to select sentiment-relevant words and discard noisy words inside the selected clauses. Finally, a sentiment rating predictor is designed to provide reward signals to guide both clause and word selection. Experimental results demonstrate the impressive effectiveness of the proposed approach to DASC over the state-of-the-art baselines.
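
The two-level selection can be caricatured with rule-based stand-ins for the learned policies; in the paper both levels are trained with the reward from the rating predictor, whereas the toy policies and lexicon below are hard-coded for illustration:

```python
def high_level_policy(clause, aspect):
    """Toy high-level policy: keep a clause only if it mentions the aspect.
    A learned policy would score clauses and be trained via the rating reward."""
    return aspect in clause

def low_level_policy(word):
    """Toy low-level policy: keep sentiment-bearing words (tiny made-up lexicon)."""
    return word in {"great", "terrible", "love", "awful", "not"}

doc = [["the", "battery", "is", "great"],
       ["shipping", "was", "slow"],
       ["i", "love", "the", "battery", "life"]]

selected = [w for clause in doc if high_level_policy(clause, "battery")
            for w in clause if low_level_policy(w)]
print(selected)  # ['great', 'love'] -> fed to the sentiment rating predictor
```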

2018

pdf bib
Self-regulation: Employing a Generative Adversarial Network to Improve Event Detection
Yu Hong | Wenxuan Zhou | Jingli Zhang | Guodong Zhou | Qiaoming Zhu
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Due to their ability to encode and map semantic information into a high-dimensional latent feature space, neural networks have been successfully used for event detection to a certain extent. However, such a feature space can be easily contaminated by spurious features inherent in event detection. In this paper, we propose a self-regulated learning approach that utilizes a generative adversarial network to generate spurious features. On this basis, we employ a recurrent network to eliminate the fakes. Detailed experiments on the ACE 2005 and TAC-KBP 2015 corpora show that our proposed method is highly effective and adaptable.
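
The self-regulation idea rests on a standard adversarial pair: a generator fabricates spurious feature vectors and a discriminator learns to separate them from genuine encoder features. A minimal PyTorch training loop of such a pair, with invented dimensions and without the paper's recurrent component, might look like:

```python
import torch
import torch.nn as nn

dim = 32
gen = nn.Sequential(nn.Linear(8, dim), nn.Tanh())      # fabricates features
disc = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())  # real vs. spurious
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(16, dim)           # stand-in for genuine encoder features
for _ in range(3):
    fake = gen(torch.randn(16, 8))
    # Discriminator: push real features toward 1, fabricated ones toward 0.
    d_loss = bce(disc(real), torch.ones(16, 1)) + \
             bce(disc(fake.detach()), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator tries to fool the discriminator.
    g_loss = bce(disc(fake), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```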

pdf bib
Incorporating Image Matching Into Knowledge Acquisition for Event-Oriented Relation Recognition
Yu Hong | Yang Xu | Huibin Ruan | Bowei Zou | Jianmin Yao | Guodong Zhou
Proceedings of the 27th International Conference on Computational Linguistics

Event relation recognition is a challenging language processing task. It requires determining the relation class of a pair of query events, such as causality, under the condition that there is no reliable clue to use. In this paper, we follow the traditional statistical approach, inferring the relation class of the target events from the relation-class distributions of similar events, with minimal supervision used during the inference process. In particular, we incorporate image processing into the acquisition of similar event instances, including the use of images to visually represent event scenes and the use of neural-network-based image matching for approximate calculation between events. We test our method on the ACE-R2 corpus and compare our model with fully-supervised neural network models. Experimental results show that we achieve a performance comparable to CNN and slightly better than LSTM.

pdf bib
Adversarial Feature Adaptation for Cross-lingual Relation Classification
Bowei Zou | Zengzhuang Xu | Yu Hong | Guodong Zhou
Proceedings of the 27th International Conference on Computational Linguistics

Relation classification aims to classify the semantic relationship between two marked entities in a given sentence. It plays a vital role in a variety of natural language processing applications. Most existing methods focus on exploiting monolingual data, e.g., in English, due to the lack of annotated data in other languages. In this paper, we propose a feature adaptation approach for cross-lingual relation classification, which employs a generative adversarial network (GAN) to transfer feature representations from a language with rich annotated data to another language with scarce annotated data. This feature adaptation approach enables feature imitation via the competition between a relation classification network and a rival discriminator. Experimental results on the ACE 2005 multilingual training corpus, treating English as the source language and Chinese as the target, demonstrate the effectiveness of our proposed approach, yielding an improvement of 5.7% over the state of the art.
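
A common way to implement this kind of adversarial feature adaptation is a gradient reversal layer: features pass through unchanged on the forward pass, but gradients from the language discriminator arrive negated, pushing the encoder toward language-invariant representations. The paper's exact training scheme may differ; this is the generic trick:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass:
    the standard gradient reversal layer used for adversarial adaptation."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

feat = torch.randn(4, 8, requires_grad=True)   # shared encoder features
loss = GradReverse.apply(feat).sum()           # stand-in discriminator loss
loss.backward()
print(feat.grad[0, :3])  # gradients arrive negated: tensor([-1., -1., -1.])
```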

pdf bib
Employing Text Matching Network to Recognise Nuclearity in Chinese Discourse
Sheng Xu | Peifeng Li | Guodong Zhou | Qiaoming Zhu
Proceedings of the 27th International Conference on Computational Linguistics

The task of nuclearity recognition in Chinese discourse remains challenging due to the demand for deeper semantic information. In this paper, we propose a novel text matching network (TMN) that encodes the discourse units and the paragraphs by combining a Bi-LSTM and a CNN to capture both global dependency information and local n-gram information. Moreover, it introduces three text matching components, the Cosine, Bilinear and Single Layer Network, to incorporate various similarities and interactions between the discourse units. Experimental results on the Chinese Discourse TreeBank show that our proposed TMN model significantly outperforms various strong baselines in both micro-F1 and macro-F1.
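
The three matching components are standard similarity functions. A minimal sketch over two assumed discourse-unit vectors u and v (parameter shapes invented for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 16
u, v = torch.randn(dim), torch.randn(dim)   # two discourse-unit vectors

# Cosine: angle-based similarity, no parameters.
cos = F.cosine_similarity(u, v, dim=0)

# Bilinear: u^T W v with a learned interaction matrix W.
W = nn.Parameter(torch.randn(dim, dim))
bilinear = u @ W @ v

# Single Layer Network: s^T tanh(W' [u; v]) over the concatenated pair.
lin = nn.Linear(2 * dim, dim)
s = nn.Parameter(torch.randn(dim))
single = s @ torch.tanh(lin(torch.cat([u, v])))

print(float(cos), float(bilinear), float(single))
```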

pdf bib
Joint Modeling of Structure Identification and Nuclearity Recognition in Macro Chinese Discourse Treebank
Xiaomin Chu | Feng Jiang | Yi Zhou | Guodong Zhou | Qiaoming Zhu
Proceedings of the 27th International Conference on Computational Linguistics

Discourse parsing is a challenging task and plays a critical role in discourse analysis. This paper focuses on macro-level discourse structure analysis, which has been less studied in previous research. We explore a macro discourse structure representation schema and build a corresponding corpus, named the Macro Chinese Discourse Treebank. On this basis, we concentrate on two tasks of macro discourse structure analysis, structure identification and nuclearity recognition. To reduce error propagation between the associated tasks, we adopt a joint model of the two tasks, and an Integer Linear Programming approach is proposed to achieve global optimization with various kinds of constraints.
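
The joint ILP can be pictured with binary variables for attachments and nuclearity labels tied together by consistency constraints. Below is a toy instance using PuLP, with invented scores and a deliberately simplified constraint set, not the paper's formulation:

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

pairs = ["u1u2", "u2u3"]              # candidate discourse-unit attachments
labels = ["NS", "SN", "NN"]           # nuclearity labels
struct_score = {"u1u2": 0.9, "u2u3": 0.4}
nucl_score = {("u1u2", "NS"): 0.7, ("u1u2", "SN"): 0.2, ("u1u2", "NN"): 0.1,
              ("u2u3", "NS"): 0.3, ("u2u3", "SN"): 0.5, ("u2u3", "NN"): 0.2}

prob = LpProblem("joint_structure_nuclearity", LpMaximize)
s = {p: LpVariable(f"s_{p}", cat=LpBinary) for p in pairs}
n = {(p, l): LpVariable(f"n_{p}_{l}", cat=LpBinary) for p in pairs for l in labels}

# Maximise the combined structure and nuclearity scores.
prob += lpSum(struct_score[p] * s[p] for p in pairs) + \
        lpSum(nucl_score[p, l] * n[p, l] for p in pairs for l in labels)
for p in pairs:
    # A pair gets exactly one nuclearity label iff its attachment is selected.
    prob += lpSum(n[p, l] for l in labels) == s[p]

prob.solve()
print({p: s[p].value() for p in pairs})
```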

pdf bib
Modeling Coherence for Neural Machine Translation with Dynamic and Topic Caches
Shaohui Kuang | Deyi Xiong | Weihua Luo | Guodong Zhou
Proceedings of the 27th International Conference on Computational Linguistics

Sentences in a well-formed text are connected to each other via various links that form the cohesive structure of the text. Current neural machine translation (NMT) systems translate a text in a conventional sentence-by-sentence fashion, ignoring such cross-sentence links and dependencies. This may lead to an incoherent target text for a coherent source text. To handle this issue, we propose a cache-based approach to modeling coherence for neural machine translation by capturing contextual information either from recently translated sentences or from the entire document. In particular, we explore two types of caches: a dynamic cache, which stores words from the best translation hypotheses of preceding sentences, and a topic cache, which maintains a set of target-side topical words that are semantically related to the document to be translated. On this basis, we build a new layer to score target words in these two caches with a cache-based neural model. The estimated probabilities from the cache-based neural model are combined with NMT probabilities into the final word prediction probabilities via a gating mechanism. Finally, the proposed cache-based neural model is trained jointly with the NMT system in an end-to-end manner. Experiments and analysis presented in this paper demonstrate that the proposed cache-based model achieves substantial improvements over several state-of-the-art SMT and NMT baselines.
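
The gating mechanism itself is compact: a sigmoid gate computed from the decoder state interpolates between the NMT distribution and the cache distribution. A minimal numeric sketch with invented sizes (note the convex combination of two distributions is again a distribution):

```python
import torch
import torch.nn.functional as F

# For each target word w:
#   p(w) = g * p_nmt(w) + (1 - g) * p_cache(w),  g = sigmoid(w_g . h)
vocab, dim = 10, 8
h = torch.randn(dim)                        # decoder hidden state
w_g = torch.randn(dim)                      # gate parameters (illustrative)
p_nmt = F.softmax(torch.randn(vocab), 0)    # standard NMT distribution
p_cache = F.softmax(torch.randn(vocab), 0)  # scores over cached words

g = torch.sigmoid(w_g @ h)
p_final = g * p_nmt + (1 - g) * p_cache
print(float(p_final.sum()))                 # still a valid distribution: 1.0
```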

pdf bib
Cross-media User Profiling with Joint Textual and Social User Embedding
Jingjing Wang | Shoushan Li | Mingqi Jiang | Hanqian Wu | Guodong Zhou
Proceedings of the 27th International Conference on Computational Linguistics

In realistic scenarios, a user profiling model (e.g., gender classification or age regression) learned from one social media platform might perform rather poorly when tested on another due to the different data distributions on the two platforms. In this paper, we address cross-media user profiling by bridging the knowledge between the source and target media with a uniform user embedding learning approach. In our approach, we first construct a cross-media user-word network to capture the relationships among users through textual information, and a modified cross-media user-user network to capture the relationships among users through social information. Then, we learn user embeddings by jointly learning over the heterogeneous network composed of the above two networks. Finally, we train a classification (or regression) model with the obtained user embeddings as input to perform user profiling. Empirical studies demonstrate the effectiveness of the proposed approach on two cross-media user profiling tasks, i.e., cross-media gender classification and cross-media age regression.

pdf bib
Stance Detection with Hierarchical Attention Network
Qingying Sun | Zhongqing Wang | Qiaoming Zhu | Guodong Zhou
Proceedings of the 27th International Conference on Computational Linguistics

Stance detection aims to assign a stance label (for or against) to a post toward a specific target. Recently, there has been growing interest in using neural models to detect the stance of documents. Most of these works model the sequence of words to learn the document representation. However, much linguistic information, such as the polarity and arguments of the document, is correlated with the stance of the document and can help infer it. Hence, we present a neural model that fully employs various kinds of linguistic information to construct the document representation. In addition, since different kinds of linguistic information exert different influences, we propose a hierarchical attention network to weigh the importance of the various linguistic information and to learn the mutual attention between the document and the linguistic information. Experimental results on two datasets demonstrate the effectiveness of the proposed hierarchical attention neural model.

pdf bib
One vs. Many QA Matching with both Word-level and Sentence-level Attention Network
Lu Wang | Shoushan Li | Changlong Sun | Luo Si | Xiaozhong Liu | Min Zhang | Guodong Zhou
Proceedings of the 27th International Conference on Computational Linguistics

Question-Answer (QA) matching is a fundamental task in the Natural Language Processing community. In this paper, we first build a novel QA matching corpus of informal text collected from a product review website. Then, we propose a novel QA matching approach, namely One vs. Many Matching, which addresses the novel scenario where one question sentence often has an answer with multiple sentences. Furthermore, we improve our matching approach by employing both word-level and sentence-level attention to address the noise problem in informal text. Empirical studies demonstrate the effectiveness of the proposed approach to question-answer matching.

pdf bib
MCDTB: A Macro-level Chinese Discourse TreeBank
Feng Jiang | Sheng Xu | Xiaomin Chu | Peifeng Li | Qiaoming Zhu | Guodong Zhou
Proceedings of the 27th International Conference on Computational Linguistics

In view of the differences between the annotations of micro and macro discourse relationships, this paper describes the relevant experiments on the construction of the Macro Chinese Discourse Treebank (MCDTB), a higher-level Chinese discourse corpus. Following RST (Rhetorical Structure Theory), we annotate the macro discourse information, including discourse structure, nuclearity and relation, as well as additional discourse information, including topic sentences, lead and abstract, to make the macro discourse annotation more objective and accurate. In total, we annotated 720 articles with a Kappa value greater than 0.6. Preliminary experiments on this corpus verify the computability of MCDTB.

pdf bib
Using active learning to expand training data for implicit discourse relation recognition
Yang Xu | Yu Hong | Huibin Ruan | Jianmin Yao | Min Zhang | Guodong Zhou
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We tackle discourse-level relation recognition, the problem of determining semantic relations between text spans. Implicit relation recognition is challenging due to the lack of explicit relational clues. Increasingly popular neural network techniques have proven effective for semantic encoding and are therefore widely employed to boost semantic relation discrimination. However, learning to predict semantic relations at a deep level relies heavily on a great deal of training data, while the scale of publicly available data in this field is limited. In this paper, we follow Rutherford and Xue (2015) and expand the training set using the corpus of explicitly-related arguments, by arbitrarily dropping the overtly presented discourse connectives. On this basis, we carry out a sampling experiment in which a simple active learning approach is used to select informative instances for data expansion. The goal is to verify whether the selective use of external data not only reduces the time consumed in retraining but also ensures better system performance. Using the expanded training data, we retrain a convolutional neural network (CNN) based classifier, a simplified version of Qin et al. (2016)’s stacking gated relation recognizer. Experimental results show that expanding the training set with small-scale, carefully-selected external data yields a substantial performance gain, with improvements of about 4% in accuracy and 3.6% in F-score. This allows a weak classifier to achieve performance comparable to the state-of-the-art systems.
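
One simple instantiation of the active selection step is entropy-based uncertainty sampling over the classifier's outputs on the connective-dropped instances; the paper's exact criterion may differ, and the numbers below are synthetic:

```python
import numpy as np

def select_informative(probs, budget):
    """Pick the `budget` synthetic instances whose predicted relation
    distribution has the highest entropy (a generic uncertainty criterion)."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:budget]

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=100)  # classifier outputs over 4
idx = select_informative(probs, budget=10)   # relation classes, 100 instances
print(idx)                                   # indices chosen for expansion
```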

pdf bib
Sentiment Classification towards Question-Answering with Hierarchical Matching Network
Chenlin Shen | Changlong Sun | Jingjing Wang | Yangyang Kang | Shoushan Li | Xiaozhong Liu | Luo Si | Min Zhang | Guodong Zhou
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

In an e-commerce environment, a user-oriented question-answering (QA) text pair can carry rich sentiment information. In this study, we propose a novel task and method to address QA sentiment analysis. In particular, we create a high-quality annotated corpus with specially-designed annotation guidelines for QA-style sentiment classification. On this basis, we propose a three-stage hierarchical matching network to explore deep sentiment information in a QA text pair. First, we segment both the question and answer text into sentences and construct a number of [Q-sentence, A-sentence] units in each QA text pair. Then, by leveraging a QA bidirectional matching layer, the proposed approach learns the matching vector of each [Q-sentence, A-sentence] unit. Finally, we characterize the importance of the generated matching vectors via a self-matching attention layer. Experimental results against a number of state-of-the-art baselines demonstrate the impressive effectiveness of the proposed approach for QA-style sentiment classification.

2017

pdf bib
Modeling Source Syntax for Neural Machine Translation
Junhui Li | Deyi Xiong | Zhaopeng Tu | Muhua Zhu | Min Zhang | Guodong Zhou
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Even though a linguistics-free sequence-to-sequence model in neural machine translation (NMT) has a certain capability of implicitly learning the syntactic information of source sentences, this paper shows that source syntax can be explicitly and effectively incorporated into NMT to provide further improvements. Specifically, we linearize the parse trees of source sentences to obtain structural label sequences. On this basis, we propose three different sorts of encoders to incorporate source syntax into NMT: 1) a Parallel RNN encoder that learns word and label annotation vectors in parallel; 2) a Hierarchical RNN encoder that learns word and label annotation vectors in a two-level hierarchy; and 3) a Mixed RNN encoder that learns word and label annotation vectors over sequences in which words and labels are mixed. Experiments on Chinese-to-English translation demonstrate that all three proposed syntactic encoders improve translation accuracy. It is interesting to note that the simplest encoder, i.e., the Mixed RNN encoder, yields the best performance, with a significant improvement of 1.4 BLEU points. Moreover, an in-depth analysis from several perspectives is provided to reveal how source syntax benefits NMT.
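
The linearization underlying the Mixed RNN encoder can be illustrated with a depth-first walk that interleaves structural labels with words (toy bracketed tree below; real systems linearize automatic parses):

```python
# Toy constituency tree for "She sings": (label, children) tuples with word
# leaves as plain strings.
tree = ("S", [("NP", [("PRP", ["She"])]),
              ("VP", [("VBZ", ["sings"])])])

def linearize(node):
    """Depth-first walk that emits each structural label before its span,
    producing one mixed sequence of labels and words."""
    label, children = node
    out = [label]
    for child in children:
        out += [child] if isinstance(child, str) else linearize(child)
    return out

print(linearize(tree))
# ['S', 'NP', 'PRP', 'She', 'VP', 'VBZ', 'sings']
```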

2016

pdf bib
Speculation and Negation Scope Detection via Convolutional Neural Networks
Zhong Qian | Peifeng Li | Qiaoming Zhu | Guodong Zhou | Zhunchen Luo | Wei Luo
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf bib
Global Inference to Chinese Temporal Relation Extraction
Peifeng Li | Qiaoming Zhu | Guodong Zhou | Hongling Wang
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Previous studies on temporal relation extraction focus on mining sentence-level information or enforcing coherence among the temporal relation types of various event mentions in the same sentence or neighboring sentences, largely ignoring discourse-level temporal relations in nonadjacent sentences. In this paper, we propose a discourse-level global inference model to mine temporal relations between event mentions at the document level, especially in nonadjacent sentences. Moreover, we provide various kinds of discourse-level constraints, derived from event semantics, to further improve our global inference model. Evaluation on a Chinese corpus justifies the effectiveness of our discourse-level global inference model over two strong baselines.

pdf bib
A Bilingual Attention Network for Code-switched Emotion Prediction
Zhongqing Wang | Yue Zhang | Sophia Lee | Shoushan Li | Guodong Zhou
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Emotions in code-switching text can be expressed in either monolingual or bilingual forms. However, relatively little research has focused on code-switching text. In this paper, we propose a Bilingual Attention Network (BAN) model that aggregates monolingual and bilingual informative words to form the document representation and integrates the attention vectors to predict the emotion. The experiments show the effectiveness of the proposed model. Visualization of the attention layers illustrates that the model selects qualitatively informative words.

pdf bib
Semi-supervised Gender Classification with Joint Textual and Social Modeling
Shoushan Li | Bin Dai | Zhengxian Gong | Guodong Zhou
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

In gender classification, labeled data is often limited while unlabeled data is ample. This motivates semi-supervised learning for gender classification to improve the performance by exploiting the knowledge in both labeled and unlabeled data. In this paper, we propose a semi-supervised approach to gender classification that leverages textual features and a specific kind of indirect link among users, which we call “same-interest” links. Specifically, we propose a factor graph, namely the Textual and Social Factor Graph (TSFG), to model both the textual and the “same-interest” link information. Empirical studies demonstrate the effectiveness of the proposed approach to semi-supervised gender classification.

pdf bib
User Classification with Multiple Textual Perspectives
Dong Zhang | Shoushan Li | Hongling Wang | Guodong Zhou
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Textual information is of critical importance for automatic user classification in social media. However, most previous studies model textual features from a single perspective, while the text on a user’s homepage typically exhibits different styles, such as original messages and comments from others. In this paper, we propose a novel approach, namely ensemble LSTM, to user classification by incorporating multiple textual perspectives. Specifically, our approach first learns an LSTM representation with an LSTM recurrent neural network and then presents a joint learning method to integrate all naturally-divided textual perspectives. Empirical studies on two basic user classification tasks, i.e., gender classification and age classification, demonstrate the effectiveness of the proposed approach to user classification with multiple textual perspectives.

pdf bib
Two-View Label Propagation to Semi-supervised Reader Emotion Classification
Shoushan Li | Jian Xu | Dong Zhang | Guodong Zhou
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

In the literature, various supervised learning approaches have been adopted to address the task of reader emotion classification. However, the classification performance greatly suffers when the size of the labeled data is limited. In this paper, we propose a two-view label propagation approach to semi-supervised reader emotion classification that exploits two views, namely source text and response text, in a label propagation algorithm. Specifically, our approach relies on two word-document bipartite graphs to model the relationships among the samples in the two views respectively. Moreover, the two bipartite graphs are integrated by linking each source text sample with its corresponding response text sample via a length-sensitive transition probability. In this way, our two-view label propagation approach largely alleviates the reliance on the strong sufficiency and independence assumptions over the two views, as required in co-training. Empirical evaluation demonstrates the effectiveness of our two-view label propagation approach to semi-supervised reader emotion classification.
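
At its core, label propagation iterates a spread-then-clamp update on a row-normalised transition matrix until labels stabilise. The sketch below runs on a single small graph rather than the paper's two coupled word-document bipartite graphs:

```python
import numpy as np

def propagate(W, Y, n_iter=50, alpha=0.9):
    """Plain label propagation: W is an adjacency matrix, Y holds one row per
    node (one-hot for labelled nodes, zeros for unlabelled ones). Each step
    spreads labels through the graph, then re-injects the labelled seeds."""
    P = W / W.sum(axis=1, keepdims=True)      # row-normalised transitions
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * P @ F + (1 - alpha) * Y   # spread, then clamp the seeds
    return F.argmax(axis=1)

# A 4-node chain: nodes 0 and 3 labelled (classes 0 and 1), 1 and 2 unlabelled.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], dtype=float)
print(propagate(W, Y))  # [0 0 1 1]: each unlabelled node takes the nearer seed
```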

pdf bib
Corpus Fusion for Emotion Classification
Suyang Zhu | Shoushan Li | Ying Chen | Guodong Zhou
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Machine learning-based methods have made great progress on emotion classification. However, in most previous studies, the models are learned from a single corpus, which often suffers from insufficient labeled data. In this paper, we propose a corpus fusion approach to address emotion classification across two corpora that use different emotion taxonomies. The objective of this approach is to utilize the annotated data from one corpus to help emotion classification on another corpus. An Integer Linear Programming (ILP) optimization is proposed to refine the classification results. Empirical studies show the effectiveness of the proposed approach to corpus fusion for emotion classification.

pdf bib
SoNLP-DP System for ConLL-2016 English Shallow Discourse Parsing
Fang Kong | Sheng Li | Junhui Li | Muhua Zhu | Guodong Zhou
Proceedings of the CoNLL-16 shared task

pdf bib
SoNLP-DP System for ConLL-2016 Chinese Shallow Discourse Parsing
Junhui Li | Fang Kong | Sheng Li | Muhua Zhu | Guodong Zhou
Proceedings of the CoNLL-16 shared task

2015

pdf bib
The SoNLP-DP System in the CoNLL-2015 shared Task
Fang Kong | Sheng Li | Guodong Zhou
Proceedings of the Nineteenth Conference on Computational Natural Language Learning - Shared Task

pdf bib
Improving Semantic Parsing with Enriched Synchronous Context-Free Grammar
Junhui Li | Muhua Zhu | Wei Lu | Guodong Zhou
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

pdf bib
Unsupervised Negation Focus Identification with Word-Topic Graph Model
Bowei Zou | Guodong Zhou | Qiaoming Zhu
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

pdf bib
Document-Level Machine Translation Evaluation with Gist Consistency and Text Cohesion
Zhengxian Gong | Min Zhang | Guodong Zhou
Proceedings of the Second Workshop on Discourse in Machine Translation

pdf bib
Negation and Speculation Identification in Chinese Language
Bowei Zou | Qiaoming Zhu | Guodong Zhou
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

pdf bib
Sentence-level Emotion Classification with Label and Context Dependence
Shoushan Li | Lei Huang | Rong Wang | Guodong Zhou
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

pdf bib
Semi-Stacking for Semi-supervised Sentiment Classification
Shoushan Li | Lei Huang | Jingjing Wang | Guodong Zhou
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

pdf bib
Emotion Detection in Code-switching Texts via Bilingual and Sentimental Information
Zhongqing Wang | Sophia Lee | Shoushan Li | Guodong Zhou
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

2014

pdf bib
Research on Chinese discourse rhetorical structure representation scheme and corpus annotation
Guodong Zhou
Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language Processing

pdf bib
A Constituent-Based Approach to Argument Labeling with Joint Inference in Discourse Parsing
Fang Kong | Hwee Tou Ng | Guodong Zhou
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

pdf bib
Building Chinese Discourse Corpus with Connective-driven Dependency Tree Structure
Yancui Li | Wenhe Feng | Jing Sun | Fang Kong | Guodong Zhou
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

pdf bib
Negation Focus Identification with Contextual Discourse Information
Bowei Zou | Guodong Zhou | Qiaoming Zhu
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf bib
Bilingual Active Learning for Relation Classification via Pseudo Parallel Corpora
Longhua Qian | Haotian Hui | Ya’nan Hu | Guodong Zhou | Qiaoming Zhu
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf bib
Bilingual Event Extraction: a Case Study on Trigger Type Determination
Zhu Zhu | Shoushan Li | Guodong Zhou | Rui Xia
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf bib
Skill Inference with Personal and Skill Connections
Zhongqing Wang | Shoushan Li | Hanxiao Shi | Guodong Zhou
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

pdf bib
Employing Event Inference to Improve Semi-Supervised Chinese Event Extraction
Peifeng Li | Qiaoming Zhu | Guodong Zhou
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2013

pdf bib
Collective Personal Profile Summarization with Social Networks
Zhongqing Wang | Shoushan Li | Fang Kong | Guodong Zhou
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf bib
Tree Kernel-based Negation and Speculation Scope Detection with Structured Syntactic Parse Features
Bowei Zou | Guodong Zhou | Qiaoming Zhu
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

pdf bib
Argument Inference from Relevant Event Mentions in Chinese Argument Extraction
Peifeng Li | Qiaoming Zhu | Guodong Zhou
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf bib
Joint Modeling of News Reader’s and Comment Writer’s Emotions
Huanhuan Liu | Shoushan Li | Guodong Zhou | Chu-Ren Huang | Peifeng Li
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2012

pdf bib
Head-Driven Hierarchical Phrase-based Translation
Junhui Li | Zhaopeng Tu | Guodong Zhou | Josef van Genabith
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf bib
Using Syntactic Head Information in Hierarchical Phrase-Based Translation
Junhui Li | Zhaopeng Tu | Guodong Zhou | Josef van Genabith
Proceedings of the Seventh Workshop on Statistical Machine Translation

pdf bib
Linguistic foundation for NLP
Guodong Zhou
Proceedings of the Second CIPS-SIGHAN Joint Conference on Chinese Language Processing

pdf bib
Exploring Local and Global Semantic Information for Event Pronoun Resolution
Fang Kong | Guodong Zhou
Proceedings of COLING 2012

pdf bib
Employing Morphological Structures and Sememes for Chinese Event Extraction
Peifeng Li | Guodong Zhou
Proceedings of COLING 2012

pdf bib
Joint Modeling of Trigger Identification and Event Type Determination in Chinese Event Extraction
Peifeng Li | Qiaoming Zhu | Hongjun Diao | Guodong Zhou
Proceedings of COLING 2012

pdf bib
Bilingual Lexicon Construction from Comparable Corpora via Dependency Mapping
Longhua Qian | Hongling Wang | Guodong Zhou | Qiaoming Zhu
Proceedings of COLING 2012

pdf bib
Classifier-Based Tense Model for SMT
ZhengXian Gong | Min Zhang | ChewLim Tan | GuoDong Zhou
Proceedings of COLING 2012: Posters

pdf bib
Phrase-Based Evaluation for Machine Translation
Liangyou Li | Zhengxian Gong | Guodong Zhou
Proceedings of COLING 2012: Posters

pdf bib
Active Learning for Chinese Word Segmentation
Shoushan Li | Guodong Zhou | Chu-Ren Huang
Proceedings of COLING 2012: Posters

pdf bib
A Unified Framework for Discourse Argument Identification via Shallow Semantic Parsing
Fan Xu | Qiaoming Zhu | Guodong Zhou
Proceedings of COLING 2012: Posters

pdf bib
Active Learning for Imbalanced Sentiment Classification
Shoushan Li | Shengfeng Ju | Guodong Zhou | Xiaojun Li
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

pdf bib
N-gram-based Tense Models for Statistical Machine Translation
Zhengxian Gong | Min Zhang | Chew Lim Tan | Guodong Zhou
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

pdf bib
Employing Compositional Semantics and Discourse Consistency in Chinese Event Extraction
Peifeng Li | Guodong Zhou | Qiaoming Zhu | Libin Hou
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

pdf bib
Unified Dependency Parsing of Chinese Morphological and Syntactic Structures
Zhongguo Li | Guodong Zhou
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

pdf bib
Using Cross-Entity Inference to Improve Event Extraction
Yu Hong | Jianfeng Zhang | Bin Ma | Jianmin Yao | Guodong Zhou | Qiaoming Zhu
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Combining Dependency and Constituent-based Syntactic Information for Anaphoricity Determination in Coreference Resolution
Fang Kong | Guodong Zhou
Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation

pdf bib
Dependency-directed Tree Kernel-based Protein-Protein Interaction Extraction from Biomedical Literature
Longhua Qian | Guodong Zhou
Proceedings of 5th International Joint Conference on Natural Language Processing

pdf bib
Transductive Minimum Error Rate Training for Statistical Machine Translation
Yinggong Zhao | Shujie Liu | Yangsheng Ji | Jiajun Chen | Guodong Zhou
Proceedings of 5th International Joint Conference on Natural Language Processing

pdf bib
Improve SMT with Source-Side “Topic-Document” Distributions
Zhengxian Gong | Guodong Zhou | Liangyou Li
Proceedings of Machine Translation Summit XIII: Papers

pdf bib
Cache-based Document-level Statistical Machine Translation
Zhengxian Gong | Min Zhang | Guodong Zhou
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

pdf bib
Employing Personal/Impersonal Views in Supervised and Semi-Supervised Sentiment Classification
Shoushan Li | Chu-Ren Huang | Guodong Zhou | Sophia Yat Mei Lee
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

pdf bib
Joint Syntactic and Semantic Parsing of Chinese
Junhui Li | Guodong Zhou | Hwee Tou Ng
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

pdf bib
Clustering-Based Stratified Seed Sampling for Semi-Supervised Relation Classification
Longhua Qian | Guodong Zhou
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf bib
A Unified Framework for Scope Learning via Simplified Shallow Semantic Parsing
Qiaoming Zhu | Junhui Li | Hongling Wang | Guodong Zhou
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf bib
A Tree Kernel-Based Unified Framework for Chinese Zero Anaphora Resolution
Fang Kong | Guodong Zhou
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

pdf bib
Hedge Detection and Scope Finding by Sequence Labeling with Procedural Feature Selection
Shaodian Zhang | Hai Zhao | Guodong Zhou | Bao-Liang Lu
Proceedings of the Fourteenth Conference on Computational Natural Language Learning – Shared Task

pdf bib
Soochow University: Description and Analysis of the Chinese Word Sense Induction System for CLP2010
Hua Xu | Bing Liu | Longhua Qian | Guodong Zhou
CIPS-SIGHAN Joint Conference on Chinese Language Processing

pdf bib
Dependency-driven Anaphoricity Determination for Coreference Resolution
Fang Kong | Guodong Zhou | Longhua Qian | Qiaoming Zhu
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

pdf bib
Sentiment Classification and Polarity Shifting
Shoushan Li | Sophia Y. M. Lee | Ying Chen | Chu-Ren Huang | Guodong Zhou
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

pdf bib
Learning the Scope of Negation via Shallow Semantic Parsing
Junhui Li | Guodong Zhou | Hongling Wang | Qiaoming Zhu
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

pdf bib
Dependency-Driven Feature-based Learning for Extracting Protein-Protein Interactions from Biomedical Text
Bing Liu | Longhua Qian | Hongling Wang | Guodong Zhou
Coling 2010: Posters

2009

pdf bib
Global Learning of Noun Phrase Anaphoricity in Coreference Resolution via Label Propagation
GuoDong Zhou | Fang Kong
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf bib
Employing the Centering Theory in Pronoun Resolution from the Semantic Perspective
Fang Kong | GuoDong Zhou | Qiaoming Zhu
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf bib
Improving Nominal SRL in Chinese Language with Verbal SRL Information and Automatic Predicate Recognition
Junhui Li | Guodong Zhou | Hai Zhao | Qiaoming Zhu | Peide Qian
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf bib
Semi-Supervised Learning for Semantic Relation Classification using Stratified Sampling Strategy
Longhua Qian | Guodong Zhou | Fang Kong | Qiaoming Zhu
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

pdf bib
Multilingual Dependency Learning: A Huge Feature Engineering Method to Semantic Dependency Parsing
Hai Zhao | Wenliang Chen | Chunyu Kit | Guodong Zhou
Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task

pdf bib
Cross Language Dependency Parsing using a Bilingual Lexicon
Hai Zhao | Yan Song | Chunyu Kit | Guodong Zhou
Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP

2008

pdf bib
Dependency Tree-based SRL with Proper Pruning and Extensive Feature Engineering
Hongling Wang | Honglin Wang | Guodong Zhou | Qiaoming Zhu
CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning

pdf bib
Context-Sensitive Convolution Tree Kernel for Pronoun Resolution
GuoDong Zhou | Fang Kong | QiaoMing Zhu
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I

pdf bib
Semi-Supervised Learning for Relation Extraction
GuoDong Zhou | JunHui Li | LongHua Qian | QiaoMing Zhu
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I

pdf bib
Exploiting Constituent Dependencies for Tree Kernel-Based Semantic Relation Extraction
Longhua Qian | Guodong Zhou | Fang Kong | Qiaoming Zhu | Peide Qian
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2007

pdf bib
Relation Extraction Using Convolution Tree Kernel Expanded with Entity Features
Longhua Qian | Guodong Zhou | Qiaomin Zhu | Peide Qian
Proceedings of the 21st Pacific Asia Conference on Language, Information and Computation

pdf bib
A Grammar-driven Convolution Tree Kernel for Semantic Role Classification
Min Zhang | Wanxiang Che | Aiti Aw | Chew Lim Tan | Guodong Zhou | Ting Liu | Sheng Li
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

pdf bib
Tree Kernel-Based Relation Extraction with Context-Sensitive Structured Parse Tree Information
GuoDong Zhou | Min Zhang | Dong Hong Ji | QiaoMing Zhu
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

2006

pdf bib
Modeling Commonality among Related Classes in Relation Extraction
GuoDong Zhou | Jian Su | Min Zhang
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

pdf bib
A Composite Kernel to Extract Relations between Entities with Both Flat and Structured Features
Min Zhang | Jie Zhang | Jian Su | GuoDong Zhou
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

pdf bib
Chinese Word Segmentation and Named Entity Recognition Based on a Context-Dependent Mutual Information Independence Model
Min Zhang | GuoDong Zhou | LingPeng Yang | DongHong Ji
Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing

2005

pdf bib
Discovering Relations Between Named Entities from a Large Raw Corpus Using Tree Similarity-Based Clustering
Min Zhang | Jian Su | Danmei Wang | Guodong Zhou | Chew Lim Tan
Second International Joint Conference on Natural Language Processing: Full Papers

pdf bib
A Chunking Strategy Towards Unknown Word Detection in Chinese Word Segmentation
GuoDong Zhou
Second International Joint Conference on Natural Language Processing: Full Papers

pdf bib
Exploring Various Knowledge in Relation Extraction
GuoDong Zhou | Jian Su | Jie Zhang | Min Zhang
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

2004

pdf bib
Improving Pronoun Resolution by Incorporating Coreferential Information of Candidates
Xiaofeng Yang | Jian Su | Guodong Zhou | Chew-Lim Tan
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)

pdf bib
Multi-Criteria-based Active Learning for Named Entity Recognition
Dan Shen | Jie Zhang | Jian Su | Guodong Zhou | Chew-Lim Tan
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)

pdf bib
Recognizing Names in Biomedical Texts using Hidden Markov Model and SVM plus Sigmoid
GuoDong Zhou
Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP)

pdf bib
Exploring Deep Knowledge Resources in Biomedical Name Recognition
GuoDong Zhou | Jian Su
Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP)

pdf bib
Discriminative Hidden Markov Modeling with Long State Dependence using a kNN Ensemble
GuoDong Zhou
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

pdf bib
Modeling of Long Distance Context Dependency
GuoDong Zhou
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

pdf bib
An NP-Cluster Based Approach to Coreference Resolution
Xiaofeng Yang | Jian Su | GuoDong Zhou | Chew Lim Tan
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

pdf bib
A High-Performance Coreference Resolution System using a Constraint-based Multi-Agent Strategy
GuoDong Zhou | Jian Su
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

2003

pdf bib
Coreference Resolution Using Competition Learning Approach
Xiaofeng Yang | Guodong Zhou | Jian Su | Chew Lim Tan
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

pdf bib
Effective Adaptation of Hidden Markov Model-based Named Entity Recognizer for Biomedical Domain
Dan Shen | Jie Zhang | Guodong Zhou | Jian Su | Chew-Lim Tan
Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine

pdf bib
Modeling of Long Distance Context Dependency in Chinese
GuoDong Zhou
Proceedings of the Second SIGHAN Workshop on Chinese Language Processing

pdf bib
A Chinese Efficient Analyser Integrating Word Segmentation, Part-Of-Speech Tagging, Partial Parsing and Full Parsing
GuoDong Zhou | Jian Su
Proceedings of the Second SIGHAN Workshop on Chinese Language Processing

pdf bib
Chunking-based Chinese Word Tokenization
GuoDong Zhou
Proceedings of the Second SIGHAN Workshop on Chinese Language Processing

2002

pdf bib
Named Entity Recognition using an HMM-based Chunk Tagger
GuoDong Zhou | Jian Su
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics

2000

pdf bib
Hybrid Text Chunking
GuoDong Zhou | Jian Su | TongGuan Tey
Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop

pdf bib
Error-driven HMM-based Chunk Tagger with Context-dependent Lexicon
GuoDong Zhou | Jian Su
2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora

1998

pdf bib
MI-trigger-based Language Modelling
Guodong Zhou | Kim-Teng Lua
Proceedings of the 12th Pacific Asia Conference on Language, Information and Computation

pdf bib
Word Association and MI-Trigger-based Language Modeling
GuoDong Zhou | KimTeng Lua
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

pdf bib
Word Association and MI-Trigger-based Language Modeling
GuoDong Zhou | KimTeng Lua
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics
