He Junyi


2023

Enhancing Implicit Sentiment Learning via the Incorporation of Part-of-Speech for Aspect-based Sentiment Analysis
Wang Junlang | Li Xia | He Junyi | Zheng Yongqiang | Ma Junteng
Proceedings of the 22nd Chinese National Conference on Computational Linguistics

“Implicit sentiment modeling in aspect-based sentiment analysis is a challenging problem due to complex expressions and the lack of opinion words in sentences. Recent efforts focusing on implicit sentiment in ABSA mostly leverage the dependency between aspects and pretrain on extra annotated corpora. We argue that linguistic knowledge can be incorporated into the model to better learn implicit sentiment knowledge. In this paper, we propose a PLM-based, linguistically enhanced framework by incorporating Part-of-Speech (POS) for aspect-based sentiment analysis. Specifically, we design an input template for PLMs that focuses on both aspect-related contextualized features and POS-based linguistic features. By aligning the representations of the tokens and their POS sequences, the introduced knowledge is expected to guide the model in learning implicit sentiment by capturing sentiment-related information. Moreover, we also design an aspect-specific self-supervised contrastive learning strategy to optimize aspect-based contextualized representation construction and assist PLMs in concentrating on target aspects. Experimental results on public benchmarks show that our model can achieve competitive and state-of-the-art performance without introducing extra annotated corpora.”
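
As a rough illustration of the approach the abstract describes, the following is a minimal sketch of a POS-augmented input template with an aspect-specific contrastive loss. It is not the authors' code: the base model (`bert-base-uncased`), the template layout, the `build_pos_template` and `cls_embedding` helpers, the dropout-based positive view, and the InfoNCE-style loss are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.train()  # keep dropout on so two passes over one input give two "views"

def build_pos_template(tokens, pos_tags, aspect):
    # One input exposing the sentence, its POS sequence, and the target aspect.
    return " [SEP] ".join([" ".join(tokens), " ".join(pos_tags), aspect])

def cls_embedding(text):
    # Encode a template and take the [CLS] vector as its representation.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    return encoder(**inputs).last_hidden_state[:, 0]  # shape: (1, hidden)

def info_nce(anchor, positive, negatives, tau=0.1):
    # Contrastive objective: the anchor should be closer to its own
    # aspect view (index 0) than to the views built for other aspects.
    sims = [F.cosine_similarity(anchor, positive) / tau]
    sims += [F.cosine_similarity(anchor, neg) / tau for neg in negatives]
    logits = torch.stack(sims, dim=1)          # (1, 1 + num_negatives)
    target = torch.zeros(1, dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, target)

tokens = ["the", "battery", "died", "within", "a", "day"]
pos_tags = ["DET", "NOUN", "VERB", "ADP", "DET", "NOUN"]
anchor = cls_embedding(build_pos_template(tokens, pos_tags, "battery"))
positive = cls_embedding(build_pos_template(tokens, pos_tags, "battery"))
negatives = [cls_embedding(build_pos_template(tokens, pos_tags, "day"))]
loss = info_nce(anchor, positive, negatives)
```

Here the positive is simply a second encoding of the same aspect template with dropout active (a common self-supervised trick), while templates built for other aspects in the sentence act as negatives, pushing the PLM to concentrate on the target aspect.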

2022

Dynamic Negative Example Construction for Grammatical Error Correction using Contrastive Learning
He Junyi | Zhuang Junbin | Li Xia
Proceedings of the 21st Chinese National Conference on Computational Linguistics

“Grammatical error correction (GEC) aims at correcting texts with different types of grammatical errors into natural and correct forms. Due to differences in error type distribution and error density, current grammatical error correction systems may over-correct writing and produce low precision. To address this issue, in this paper we propose a dynamic negative example construction method for grammatical error correction using contrastive learning. The proposed method constructs sufficient negative examples with diverse grammatical errors, which can be used dynamically during model training. The constructed negative examples help the GEC model correct sentences precisely and suppress over-correction. Experimental results show that our proposed method improves model precision, demonstrating its effectiveness.”
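
As a loose sketch of the idea, negatives with diverse grammatical errors can be fabricated from the gold target on the fly, so the negative set changes at every training step. The corruption operations below (drop, swap, repeat) and every name in this snippet are illustrative assumptions, not the paper's actual construction rules.

```python
import random

ERROR_OPS = ["drop", "swap", "repeat"]

def corrupt(tokens, n_errors=1):
    # Inject n_errors random perturbations mimicking common grammatical errors.
    tokens = list(tokens)
    for _ in range(n_errors):
        op = random.choice(ERROR_OPS)
        i = random.randrange(len(tokens))
        if op == "drop" and len(tokens) > 1:
            del tokens[i]                                        # missing word
        elif op == "swap" and i + 1 < len(tokens):
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]  # word order
        elif op == "repeat":
            tokens.insert(i, tokens[i])                          # redundant word
    return tokens

def make_negatives(gold_tokens, k=4):
    # Called inside the training loop, so each step sees fresh negatives.
    return [corrupt(gold_tokens, n_errors=random.randint(1, 2)) for _ in range(k)]

gold = "she goes to school every day".split()
for negative in make_negatives(gold):
    print(" ".join(negative))
```

In a full GEC setup these near-miss corruptions would be scored against the gold correction under a contrastive loss, encouraging the model to prefer the precise correction over plausible over-corrections.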