HIT-SCIR at SemEval-2020 Task 5: Training Pre-trained Language Model with Pseudo-labeling Data for Counterfactuals Detection
Xiao Ding | Dingkui Hao | Yuewei Zhang | Kuo Liao | Zhongyang Li | Bing Qin | Ting Liu
Proceedings of the Fourteenth Workshop on Semantic Evaluation, 2020
We describe our system for Task 5 of SemEval 2020: Modelling Causal Reasoning in Language: Detecting Counterfactuals. Although deep learning has achieved significant success in many fields, it can still hardly drive today's AI toward strong AI, as it lacks an understanding of causation, a fundamental concept in human thinking and reasoning. In this task, we focus on detecting causation, especially counterfactuals, in text. We explore multiple pre-trained models to learn basic features and then fine-tune them on counterfactual data and pseudo-labeled data. Our team HIT-SCIR ranked 1st in Sub-task 1 (Detecting Counterfactual Statements) and 4th in Sub-task 2 (Detecting Antecedent and Consequence). In this paper we provide a detailed description of our approach, as well as the results obtained in this task.
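The abstract mentions fine-tuning pre-trained models on pseudo-labeled data. Below is a minimal sketch of the pseudo-labeling step, assuming a Hugging Face transformers setup; the checkpoint name ("roberta-base") and the confidence threshold are illustrative assumptions, not the authors' actual configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint for illustration; the paper explores several
# pre-trained models rather than this specific one.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # counterfactual vs. not
)

def pseudo_label(unlabeled_texts, threshold=0.95):
    """Predict labels for unlabeled sentences with the current model and
    keep only high-confidence predictions as extra training data.
    The 0.95 threshold is an assumed value."""
    model.eval()
    kept = []
    with torch.no_grad():
        for text in unlabeled_texts:
            inputs = tokenizer(text, return_tensors="pt",
                               truncation=True, max_length=256)
            probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze(0)
            confidence, label = probs.max(dim=-1)
            if confidence.item() >= threshold:
                kept.append({"text": text, "label": label.item()})
    return kept

# The pseudo-labeled examples would then be mixed with the gold
# counterfactual data for a further round of fine-tuning.
```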