Xiao Liang


2024

Task Oriented In-Domain Data Augmentation
Xiao Liang | Xinyu Hu | Simiao Zuo | Yeyun Gong | Qiang Lou | Yi Liu | Shao-Lun Huang | Jian Jiao
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Large Language Models (LLMs) have shown superior performance in various applications and fields. To achieve better performance in specialized domains such as law and advertisement, LLMs are often continually pre-trained on in-domain data. However, existing approaches suffer from two major issues. First, in-domain data are scarce compared with general domain-agnostic data. Second, the data used for continual pre-training are not task-aware, so they may not help downstream applications. We propose TRAIT, a task-oriented in-domain data augmentation framework. Our framework is divided into two parts: in-domain data selection and task-oriented synthetic passage generation. The data selection strategy identifies and selects a large amount of in-domain data from general corpora, and thus significantly enriches domain knowledge in the continual pre-training data. The synthetic passages contain guidance on how to use domain knowledge to answer questions about downstream tasks. By training on such passages, the model aligns with the needs of downstream applications. We adapt LLMs to two domains: advertisement and math. On average, TRAIT improves LLM performance by 8% in the advertisement domain and 7.5% in the math domain.
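As a rough illustration of the framework's first stage, the sketch below scores general-corpus documents with a seed classifier and keeps those that look in-domain. The scikit-learn pipeline, the `select_in_domain` helper, and the 0.8 threshold are illustrative assumptions, not TRAIT's actual selection model.

```python
# Hypothetical sketch of classifier-based in-domain data selection,
# in the spirit of TRAIT's first stage. Names, models, and the
# threshold are illustrative assumptions, not the paper's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def select_in_domain(seed_domain_docs, general_docs, candidates, threshold=0.8):
    """Train a seed classifier, then keep candidates scored as in-domain."""
    texts = seed_domain_docs + general_docs
    labels = [1] * len(seed_domain_docs) + [0] * len(general_docs)
    vec = TfidfVectorizer(max_features=50_000)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(texts), labels)
    # Keep candidate documents whose in-domain probability clears the bar.
    scores = clf.predict_proba(vec.transform(candidates))[:, 1]
    return [doc for doc, s in zip(candidates, scores) if s >= threshold]
```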

Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers
Jiawen Xie | Pengyu Cheng | Xiao Liang | Yong Dai | Nan Du
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Although dominant in natural language processing, transformer-based models still struggle with long-sequence processing, due to the computational cost of their self-attention operations, which increases quadratically as the length of the input sequence grows. To address this challenge, we propose a **Sim**ple framework to enhance the long-content processing of off-the-shelf pre-trained transformers via three steps: **C**hunk, **A**lign, and **S**elect (SimCAS). More specifically, we first divide each long-sequence input into a batch of chunks, then align the inter-chunk information during the encoding steps, and finally select the most representative hidden states from the encoder for the decoding process. With our SimCAS, the computation and memory costs can be reduced to linear complexity. In experiments, we demonstrate the effectiveness of the proposed method on various real-world long-text summarization and reading comprehension tasks, in which SimCAS significantly outperforms prior long-sequence processing baselines. The code is at [https://github.com/xjw-nlp/SimCAS](https://github.com/xjw-nlp/SimCAS).
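A minimal sketch of the chunk-encode-select pattern the abstract outlines, assuming a generic per-chunk encoder and a linear saliency scorer; the paper's inter-chunk alignment step and its actual selection mechanism are not reproduced here.

```python
# Minimal sketch of the chunk -> encode -> select pattern behind SimCAS.
# The encoder, chunk size, and saliency scorer are illustrative stand-ins.
import torch
import torch.nn as nn

class ChunkSelect(nn.Module):
    def __init__(self, encoder: nn.Module, d_model: int,
                 chunk_len: int = 512, keep: int = 256):
        super().__init__()
        # encoder: any module mapping (n, d_model) -> (n, d_model),
        # e.g. an MLP standing in for a pre-trained transformer encoder.
        self.encoder, self.chunk_len, self.keep = encoder, chunk_len, keep
        self.scorer = nn.Linear(d_model, 1)  # saliency score per hidden state

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (seq_len, d_model) embeddings of one long input
        chunks = emb.split(self.chunk_len)            # fixed-size chunks
        states = torch.cat([self.encoder(c) for c in chunks])  # encode each chunk
        scores = self.scorer(states).squeeze(-1)      # (seq_len,)
        # Keep the top-k states, restored to original order for decoding.
        top = scores.topk(min(self.keep, states.size(0))).indices.sort().values
        return states[top]
```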

TMFN: A Target-oriented Multi-grained Fusion Network for End-to-end Aspect-based Multimodal Sentiment Analysis
Di Wang | Yuzheng He | Xiao Liang | Yumin Tian | Shaofeng Li | Lin Zhao
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

End-to-end multimodal aspect-based sentiment analysis (MABSA) combines multimodal aspect term extraction (MATE) with multimodal aspect sentiment classification (MASC), aiming to simultaneously extract aspect words and classify the sentiment polarity of each aspect. However, existing MABSA methods overlook two issues: (i) they fuse only regional image information with textual words for the two subtasks, whereas the MATE subtask relies more on global image information to help determine the number and attributes of aspects, so ignoring global information may hurt performance; (ii) they fail to exploit target information, even though fine-grained target details are important for classifying the sentiment of each aspect. To solve these problems, we propose a Target-oriented Multi-grained Fusion Network (TMFN). It fuses text with coarse-grained global image information for the MATE subtask and with fine-grained image information for the MASC subtask. In addition, a target-oriented feature alignment (TOFA) module enriches image features with target details, so that the image features carry more target-related emotional information, which benefits sentiment classification. Extensive experiments show that our method outperforms state-of-the-art methods on two benchmark datasets.
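One plausible way to realize the multi-grained fusion and target-oriented alignment described above is with cross-attention, sketched below; the module layout, shapes, and names are assumptions for exposition, not the paper's architecture.

```python
# Illustrative sketch of multi-grained text-image fusion with a
# target-oriented alignment step, loosely following the TMFN abstract.
# All shapes, modules, and names are expository assumptions.
import torch
import torch.nn as nn

class MultiGrainedFusion(nn.Module):
    def __init__(self, d: int = 768, heads: int = 8):
        super().__init__()
        self.coarse_attn = nn.MultiheadAttention(d, heads, batch_first=True)  # text <- global image
        self.fine_attn = nn.MultiheadAttention(d, heads, batch_first=True)    # text <- image regions
        self.target_align = nn.MultiheadAttention(d, heads, batch_first=True) # regions <- target words

    def forward(self, text, img_global, img_regions, target):
        # text: (B, T, d); img_global: (B, 1, d); img_regions: (B, R, d); target: (B, K, d)
        mate_feat, _ = self.coarse_attn(text, img_global, img_global)  # coarse fusion for MATE
        aligned, _ = self.target_align(img_regions, target, target)    # enrich regions with target detail
        masc_feat, _ = self.fine_attn(text, aligned, aligned)          # fine fusion for MASC
        return mate_feat, masc_feat
```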

2020

面向垂直领域的阅读理解数据增强方法(Method for reading comprehension data enhancement in vertical field)
Zhengwei Lv (吕政伟) | Lei Yang (杨雷) | Zhizhong Shi (石智中) | Xiao Liang (梁霄) | Tao Lei (雷涛) | Duoxing Liu (刘多星)
Proceedings of the 19th Chinese National Conference on Computational Linguistics

Reading comprehension question answering systems use natural language processing techniques such as semantic understanding to analyze unstructured documents and generate an answer to an input question, and they have high research and application value. In vertical-domain applications, annotating reading comprehension QA data is costly and user questions are expressed in complex and diverse ways, so reading comprehension QA systems suffer from low accuracy and poor robustness. To address this problem, this paper proposes a data augmentation method for vertical-domain reading comprehension QA: based on real user questions, it constructs reading comprehension training data, which both lowers annotation cost and increases the diversity of the training data, improving the model's accuracy and robustness. We validate the method experimentally on data from the automotive domain, and the results show that it effectively improves both the accuracy and the robustness of vertical-domain reading comprehension models.
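A hypothetical sketch of the core idea, building SQuAD-style training examples from real user questions by matching them against an existing knowledge base; the character-overlap matcher, the `build_examples` helper, and the data format are assumptions, not the paper's method.

```python
# Hypothetical sketch of building extractive-QA training examples from
# real user questions, in the spirit of the augmentation described above.
def build_examples(user_questions, qa_pairs):
    """qa_pairs: list of (canonical_question, answer, passage) from a knowledge base."""
    examples = []
    for uq in user_questions:
        # Naive match: character overlap works for Chinese text
        # without word segmentation.
        best = max(qa_pairs, key=lambda p: len(set(uq) & set(p[0])))
        _, answer, passage = best
        start = passage.find(answer)
        if start != -1:  # keep only examples whose answer span occurs in the passage
            examples.append({"question": uq, "context": passage,
                             "answer": {"text": answer, "answer_start": start}})
    return examples
```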

2019

AUTOHOME-ORCA at SemEval-2019 Task 8: Application of BERT for Fact-Checking in Community Forums
Zhengwei Lv | Duoxing Liu | Haifeng Sun | Xiao Liang | Tao Lei | Zhizhong Shi | Feng Zhu | Lei Yang
Proceedings of the 13th International Workshop on Semantic Evaluation

Fact checking is an important task for maintaining high-quality posts and improving user experience in Community Question Answering forums. SemEval-2019 Task 8 therefore aims to identify factual questions (subtask A) and to detect true factual information in the corresponding answers (subtask B). To address this task, we propose a system based on BERT augmented with question meta-information. For subtask A, the outputs of a fine-tuned BERT classifier are combined with a question-length feature to boost performance. For subtask B, the predictions of several BERT variants that encode the meta-information are combined into an ensemble. Our system achieved competitive results, with an accuracy of 0.82 on subtask A and 0.83 on subtask B. The experimental results validate the effectiveness of our system.
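A hedged sketch of the two combination steps the abstract describes, assuming predicted probabilities are already available; the logistic-regression meta-classifier and the simple averaging ensemble are plausible readings of the abstract, not the authors' exact setup.

```python
# Sketch of (a) stacking a question-length feature onto BERT scores
# (subtask A) and (b) averaging variant predictions (subtask B).
# The meta-classifier choice is an assumption for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def combine_with_length(bert_probs, questions, labels):
    """Fit a meta-classifier over [BERT score, question length]."""
    feats = np.column_stack([bert_probs, [len(q.split()) for q in questions]])
    return LogisticRegression().fit(feats, labels)

def ensemble(variant_probs):
    """variant_probs: (n_models, n_examples, n_classes) -> averaged class prediction."""
    return np.mean(variant_probs, axis=0).argmax(axis=-1)
```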