Qiaoyang Luo


2022

aiML at the FinNLP-2022 ERAI Task: Combining Classification and Regression Tasks for Financial Opinion Mining
Zhaoxuan Qin | Jinan Zou | Qiaoyang Luo | Haiyao Cao | Yang Jiao
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)

Identifying opinion posts of high financial quality is of great significance for investors. This paper presents our solutions to the shared task on Evaluating the Rationales of Amateur Investors (ERAI). The pairwise comparison task requires selecting, from each pair of posts, the post that will trigger higher MPP and ML values, while the unsupervised ranking task aims to find the top 10% of posts with the highest MPP and ML values. We first model the shared task as text classification and regression problems, and then propose a multi-learning approach that applies financial-domain pre-trained models and multiple linear classifiers for factor combinations, so as to better integrate the relationships and information in the training data. In the official evaluation, our method achieves 48.28% and 52.87% accuracy for MPP and ML on the pairwise comparison task, and 14.02% and -4.17% for MPP and ML on the unsupervised ranking task. Our source code is available.
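
As a rough illustration of the classification-plus-regression framing described in the abstract, the sketch below jointly trains a pairwise classification head and an MPP/ML regression head on a shared text encoder. It is an assumption written in plain PyTorch, not the authors' released code; the small Transformer encoder stands in for the financial-domain pre-trained models the paper actually uses, and all names and hyperparameters are illustrative.

    # Illustrative assumption only, not the authors' code: a shared encoder feeding
    # a classification head (pairwise comparison) and a regression head (MPP/ML
    # value), trained with a weighted joint loss.
    import torch
    import torch.nn as nn

    class JointOpinionModel(nn.Module):
        def __init__(self, vocab_size=30522, hidden=256, num_classes=2):
            super().__init__()
            # Placeholder encoder; the paper uses financial-domain pre-trained models.
            self.embed = nn.Embedding(vocab_size, hidden)
            layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.cls_head = nn.Linear(hidden, num_classes)  # which post of a pair wins
            self.reg_head = nn.Linear(hidden, 1)            # predicted MPP or ML value

        def forward(self, token_ids):
            h = self.encoder(self.embed(token_ids)).mean(dim=1)  # mean-pool over tokens
            return self.cls_head(h), self.reg_head(h).squeeze(-1)

    def joint_loss(cls_logits, reg_pred, labels, targets, alpha=0.5):
        # Weighted combination of the classification and regression objectives.
        return (alpha * nn.functional.cross_entropy(cls_logits, labels)
                + (1 - alpha) * nn.functional.mse_loss(reg_pred, targets))

    # Toy usage with random data.
    model = JointOpinionModel()
    tokens = torch.randint(0, 30522, (8, 64))   # batch of 8 posts, 64 token ids each
    labels = torch.randint(0, 2, (8,))          # pairwise comparison labels
    targets = torch.randn(8)                    # MPP or ML regression targets
    cls_logits, reg_pred = model(tokens)
    joint_loss(cls_logits, reg_pred, labels, targets).backward()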

Adaptive Meta-learner via Gradient Similarity for Few-shot Text Classification
Tianyi Lei | Honghui Hu | Qiaoyang Luo | Dezhong Peng | Xu Wang
Proceedings of the 29th International Conference on Computational Linguistics

Few-shot text classification aims to classify text under the few-shot scenario. Most previous methods adopt optimization-based meta-learning to obtain the task distribution. However, because these methods neglect both the mismatch between the small number of samples and complicated models and the distinction between useful and useless task features, they suffer from overfitting. To address this issue, we propose a novel Adaptive Meta-learner via Gradient Similarity (AMGS) method to improve the model's generalization ability on new tasks. Specifically, AMGS alleviates overfitting in two ways: (i) it acquires the potential semantic representation of samples and improves model generalization through a self-supervised auxiliary task in the inner loop, and (ii) it leverages the adaptive meta-learner via gradient similarity to constrain the gradients obtained by the base-learner in the outer loop. Moreover, we systematically analyze the influence of regularization on the entire framework. Experimental results on several benchmarks demonstrate that AMGS consistently improves few-shot text classification performance compared with state-of-the-art optimization-based meta-learning approaches. The code is available at: https://github.com/Tianyi-Lei.
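
To make the outer-loop constraint concrete, here is a toy sketch (an assumption, not the AMGS implementation) of gating each parameter's outer-loop gradient by its cosine similarity to the corresponding inner-loop gradient, so that directions inconsistent with the base-learner's adaptation contribute less to the meta-update. The function name and threshold are hypothetical.

    # Toy assumption, not the AMGS paper's exact algorithm: gate each parameter's
    # outer-loop gradient by its cosine similarity to the inner-loop gradient, so
    # inconsistent directions contribute less to the meta-update.
    import torch

    def gradient_similarity_gate(inner_grads, outer_grads, threshold=0.0):
        """Scale each outer-loop gradient by its (clamped) cosine similarity
        to the corresponding inner-loop gradient."""
        gated = []
        for g_in, g_out in zip(inner_grads, outer_grads):
            sim = torch.nn.functional.cosine_similarity(
                g_in.flatten(), g_out.flatten(), dim=0
            )
            gated.append(sim.clamp(min=threshold) * g_out)
        return gated

    # Toy usage with two parameter tensors.
    inner = [torch.randn(3, 3), torch.randn(5)]
    outer = [torch.randn(3, 3), torch.randn(5)]
    gated = gradient_similarity_gate(inner, outer)
    print([g.shape for g in gated])

Clamping the similarity at zero simply discards conflicting gradient directions; a softer re-weighting would also fit the description above.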

2021

Don’t Miss the Labels: Label-semantic Augmented Meta-Learner for Few-Shot Text Classification
Qiaoyang Luo | Lingqiao Liu | Yuhao Lin | Wei Zhang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021