Fang Ma
2022
XPrompt: Exploring the Extreme of Prompt Tuning
Fang Ma | Chen Zhang | Lei Ren | Jingang Wang | Qifan Wang | Wei Wu | Xiaojun Quan | Dawei Song
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Prompt tuning learns soft prompts to condition frozen pre-trained language models (PLMs) to perform downstream tasks in a parameter-efficient manner. While prompt tuning has gradually reached the performance level of fine-tuning as the model scale increases, there is still a large performance gap between prompt tuning and fine-tuning for models of moderate and small scales (typically fewer than 11B parameters). In this paper, we empirically show that the trained prompt tokens can have a negative impact on a downstream task and thus degrade its performance. To bridge the gap, we propose a novel Prompt tuning model with an eXtremely small scale (XPrompt) under the regime of the lottery ticket hypothesis. Specifically, XPrompt eliminates the negative prompt tokens at different granularity levels through hierarchical structured pruning, yielding a more parameter-efficient prompt with competitive performance. Comprehensive experiments are carried out on the SuperGLUE tasks, and the results indicate that XPrompt is able to close the performance gap at smaller model scales.
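The abstract describes soft prompt tuning followed by pruning of unhelpful prompt tokens. The snippet below is a minimal, generic sketch of that idea, not the authors' XPrompt code: the SoftPrompt class, its prune_tokens method, and the L2-norm importance score are illustrative assumptions, and XPrompt's actual hierarchical (token- and piece-level) pruning with lottery-ticket-style rewinding is not reproduced here.

```python
# Minimal, generic sketch of soft prompt tuning with token-level pruning.
# NOT the authors' XPrompt implementation; names and the magnitude-based
# importance score are illustrative assumptions only.
import torch
import torch.nn as nn


class SoftPrompt(nn.Module):
    def __init__(self, num_tokens: int, hidden_dim: int):
        super().__init__()
        # Trainable prompt embeddings; the backbone PLM stays frozen.
        self.embeddings = nn.Parameter(torch.randn(num_tokens, hidden_dim) * 0.02)
        # Binary mask used to drop prompt tokens identified as unhelpful.
        self.register_buffer("mask", torch.ones(num_tokens))

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the (masked) prompt embeddings to the input token embeddings.
        batch = input_embeds.size(0)
        prompt = (self.embeddings * self.mask.unsqueeze(-1)).unsqueeze(0)
        return torch.cat([prompt.expand(batch, -1, -1), input_embeds], dim=1)

    @torch.no_grad()
    def prune_tokens(self, keep_ratio: float) -> None:
        # Score each prompt token by its L2 norm and keep only the top ones,
        # a simple stand-in for importance-based structured pruning.
        scores = self.embeddings.norm(dim=-1)
        k = max(1, int(keep_ratio * scores.numel()))
        top = scores.topk(k).indices
        new_mask = torch.zeros_like(self.mask)
        new_mask[top] = 1.0
        self.mask.copy_(new_mask)
```

In a lottery-ticket-style pipeline one would first train the prompt, then call prune_tokens, and finally retrain the surviving prompt embeddings while the backbone remains frozen.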
Structural Bias for Aspect Sentiment Triplet Extraction
Chen Zhang | Lei Ren | Fang Ma | Jingang Wang | Wei Wu | Dawei Song
Proceedings of the 29th International Conference on Computational Linguistics
Structural bias has recently been exploited for aspect sentiment triplet extraction (ASTE) and has led to improved performance. On the other hand, it is recognized that explicitly incorporating structural bias has a negative impact on efficiency, whereas pretrained language models (PLMs) can already capture implicit structures. A natural question thus arises: is structural bias still a necessity in the context of PLMs? To answer this question, we propose to address the efficiency issues by using an adapter to integrate structural bias into the PLM and using a cheap-to-compute relative position structure in place of the syntactic dependency structure. Benchmarking evaluation is conducted on the SemEval datasets. The results show that our proposed structural adapter is beneficial to PLMs and achieves state-of-the-art performance over a range of strong baselines, yet with a light parameter demand and low latency. Meanwhile, we raise the concern that the current evaluation default of using small-scale data leads to under-confident conclusions. Consequently, we release a large-scale dataset for ASTE. The results on the new dataset suggest that the structural adapter remains effective and efficient at large scale. Overall, we conclude that structural bias is still a necessity even with PLMs.
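As a rough illustration of integrating a cheap relative-position structure through an adapter, the sketch below shows a generic bottleneck adapter with a learned relative-distance bias. It is not the paper's structural adapter: the class name PositionBiasAdapter, the distance-clipping scheme, and the way the bias is turned into mixing weights are assumptions made for the example.

```python
# Generic bottleneck adapter with a relative-position bias term.
# NOT the paper's structural adapter; names and the clipping scheme
# are illustrative assumptions.
import torch
import torch.nn as nn


class PositionBiasAdapter(nn.Module):
    def __init__(self, hidden_dim: int, bottleneck: int = 64, max_dist: int = 8):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_dim)
        # One learned scalar bias per clipped relative distance: a cheap
        # substitute for an explicit syntactic dependency structure.
        self.rel_bias = nn.Embedding(2 * max_dist + 1, 1)
        self.max_dist = max_dist

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim) from a frozen PLM layer.
        seq_len = hidden.size(1)
        pos = torch.arange(seq_len, device=hidden.device)
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_dist, self.max_dist)
        bias = self.rel_bias(rel + self.max_dist).squeeze(-1)  # (seq, seq)
        # Mix token states with position-biased weights (broadcast over batch).
        mixed = torch.matmul(torch.softmax(bias, dim=-1), hidden)
        # Residual bottleneck transformation keeps the added parameters small.
        return hidden + self.up(torch.relu(self.down(mixed)))
```

The adapter adds only the two bottleneck projections and a small bias table, which is what keeps the parameter demand and latency low compared with fine-tuning the whole PLM or computing syntactic dependency structures.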
2021
Exploiting Position Bias for Robust Aspect Sentiment Classification
Fang Ma | Chen Zhang | Dawei Song
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021