Daming Lu


2022

pdf bib
daminglu123 at SemEval-2022 Task 2: Using BERT and LSTM to Do Text Classification
Daming Lu
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)

Multiword expressions (MWEs), and idiomaticity in general, are a common phenomenon in natural languages. Current pre-trained language models cannot effectively capture the meaning of these MWEs: two ordinary words, once combined, can take on a meaning that differs abruptly from the composition of their individual meanings, whereas pre-trained language models rely on word compositionality. We propose an improved method that adds an LSTM layer to the BERT model to obtain better results on a text classification task (Subtask A). Our result is slightly better than the baseline. We also tried adding TextCNN to BERT, as well as adding both LSTM and TextCNN to BERT, and found that adding only the LSTM gives the best performance.
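
A minimal sketch of the architecture the abstract describes, assuming a PyTorch / Hugging Face Transformers setup: BERT token embeddings are fed through an LSTM, whose final hidden states drive the Subtask A classifier. The checkpoint name, hidden size, and bidirectionality below are illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertLSTMClassifier(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased",
                 lstm_hidden=256, num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from BERT.
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        # Run the token sequence through the LSTM and use the final hidden
        # states of both directions as the sentence representation.
        _, (h_n, _) = self.lstm(hidden)
        seq_repr = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        return self.classifier(seq_repr)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertLSTMClassifier()
batch = tokenizer(["he kicked the bucket yesterday"],
                  return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape (1, 2)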

2020

pdf bib
Masked Reasoner at SemEval-2020 Task 4: Fine-Tuning RoBERTa for Commonsense Reasoning
Daming Lu
Proceedings of the Fourteenth Workshop on Semantic Evaluation

This paper describes the masked reasoner system that participated in SemEval-2020 Task 4: Commonsense Validation and Explanation. The system participated in Subtask B. We propose a novel method to fine-tune RoBERTa by masking the most important word in the statement. We believe that the system's confidence in recovering that word is positively correlated with the score the masked language model gives to the current statement-explanation pair. We evaluate the importance of each word using InferSent and perform the masked fine-tuning on RoBERTa. We then use the fine-tuned model to predict the most plausible explanation. Our system is fast to train and achieved 73.5% accuracy.
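
A minimal sketch of the masked-scoring idea, not the authors' full system: one word in the statement is masked, and RoBERTa's masked-LM head scores how confidently it can be recovered given a candidate explanation. The word-importance step (InferSent in the paper) and the fine-tuning step are omitted here; which word is masked, and scoring with an off-the-shelf checkpoint, are illustrative assumptions.

import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")
model.eval()

def recovery_confidence(statement, explanation, important_word):
    """Probability that the MLM recovers `important_word` when masked."""
    masked = statement.replace(important_word, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, explanation, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_pos].softmax(dim=-1).squeeze(0)
    # Score with the first subword of the masked word; the leading space
    # matters for RoBERTa's BPE vocabulary.
    target_id = tokenizer.encode(" " + important_word,
                                 add_special_tokens=False)[0]
    return probs[target_id].item()

statement = "He put an elephant into the fridge"
explanations = ["An elephant is much bigger than a fridge",
                "Elephants are usually grey"]
# Pick the explanation under which the masked word is recovered most confidently.
best = max(explanations,
           key=lambda e: recovery_confidence(statement, e, "elephant"))
print(best)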