Man-Chen Hung


2021

Classification of Tweets Self-reporting Adverse Pregnancy Outcomes and Potential COVID-19 Cases Using RoBERTa Transformers
Lung-Hao Lee | Man-Chen Hung | Chien-Huan Lu | Chang-Hao Chen | Po-Lei Lee | Kuo-Kai Shyu
Proceedings of the Sixth Social Media Mining for Health (#SMM4H) Workshop and Shared Task

This study describes our proposed model design for the SMM4H 2021 shared tasks. We fine-tune RoBERTa transformer language models together with a connected classification head to classify tweets self-reporting adverse pregnancy outcomes (Task 4) and potential COVID-19 cases (Task 5). The evaluation metric for both tasks is the F1-score of the positive class. For Task 4, our best F1-score of 0.93 exceeded the mean score of 0.925; for Task 5, our best F1-score of 0.75 exceeded the mean score of 0.745.
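As a rough illustration of the approach described above, the sketch below fine-tunes a RoBERTa model with a classification head for binary tweet classification using Hugging Face Transformers. It is not the authors' released code; the model checkpoint, hyperparameters, and the placeholder training data are assumptions for demonstration only.

```python
# Hedged sketch: fine-tuning RoBERTa + classifier for binary tweet classification.
# Checkpoint, hyperparameters, and the toy data below are illustrative assumptions.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class TweetDataset(Dataset):
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base",
                                                           num_labels=2)

# Replace with the SMM4H Task 4 / Task 5 tweets and their positive/negative labels.
train_texts, train_labels = ["example self-reporting tweet"], [1]
train_ds = TweetDataset(train_texts, train_labels, tokenizer)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```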

Multi-Label Classification of Chinese Humor Texts Using Hypergraph Attention Networks
Hao-Chuan Kao | Man-Chen Hung | Lung-Hao Lee | Yuen-Hsien Tseng
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)

We use Hypergraph Attention Networks (HyperGAT) to recognize multiple labels of Chinese humor texts. We first represent each joke as a hypergraph, constructing hyperedges from both sequential and semantic structures. Attention mechanisms are then adopted to aggregate the contextual information embedded in nodes and hyperedges. Finally, the trained HyperGAT model completes the multi-label classification task. Experimental results on the Chinese humor multi-label dataset show that the HyperGAT model outperforms previous sequence-based (CNN, BiLSTM, FastText) and graph-based (Graph-CNN, TextGCN, Text-Level GNN) deep learning models.
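To make the two-step attention aggregation concrete, the following is a minimal sketch of a single hypergraph attention layer in the spirit of HyperGAT, not the authors' implementation: nodes are pooled into hyperedge representations with node-level attention, and hyperedges are pooled back into node representations with edge-level attention. The dimensions, incidence-matrix format, and toy data are illustrative assumptions.

```python
# Hedged sketch of one HyperGAT-style layer (not the paper's exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperGATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.node_proj = nn.Linear(in_dim, out_dim)
        self.edge_proj = nn.Linear(out_dim, out_dim)
        self.att_node = nn.Linear(out_dim, 1)       # node -> hyperedge attention
        self.att_edge = nn.Linear(2 * out_dim, 1)   # hyperedge -> node attention

    def forward(self, x, incidence):
        # x: (num_nodes, in_dim); incidence: (num_nodes, num_edges) 0/1 matrix
        h = self.node_proj(x)                                        # (N, D)
        # Node-level attention: weight the nodes inside each hyperedge.
        scores = self.att_node(h)                                    # (N, 1)
        mask = incidence.T                                           # (E, N)
        alpha = torch.softmax(scores.T.masked_fill(mask == 0, -1e9), dim=-1)
        edge_h = self.edge_proj(alpha @ h)                           # (E, D)
        # Edge-level attention: weight the hyperedges incident to each node.
        pair = torch.cat([h.unsqueeze(1).expand(-1, edge_h.size(0), -1),
                          edge_h.unsqueeze(0).expand(h.size(0), -1, -1)], dim=-1)
        beta = self.att_edge(pair).squeeze(-1).masked_fill(incidence == 0, -1e9)
        beta = torch.softmax(beta, dim=-1)                           # (N, E)
        return F.elu(beta @ edge_h)                                  # (N, D)

# Toy usage: 5 word nodes, 2 hyperedges (e.g. one sequential, one semantic).
x = torch.randn(5, 16)
incidence = torch.tensor([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1]]).float()
out = HyperGATLayer(16, 32)(x, incidence)
print(out.shape)  # torch.Size([5, 32])
```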

NCU-NLP at ROCLING-2021 Shared Task: Using MacBERT Transformers for Dimensional Sentiment Analysis
Man-Chen Hung | Chao-Yi Chen | Pin-Jung Chen | Lung-Hao Lee
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)

We use MacBERT transformers and fine-tune them for the ROCLING-2021 shared task using the CVAT and CVAS data. We compare the performance of MacBERT with two other transformers, BERT and RoBERTa, on the valence and arousal dimensions. Mean absolute error (MAE) and the correlation coefficient (r) are used as evaluation metrics. On the ROCLING-2021 test set, our MacBERT model achieves an MAE of 0.611 and an r of 0.904 in the valence dimension, and an MAE of 0.938 and an r of 0.549 in the arousal dimension.
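The sketch below illustrates, under stated assumptions, how MacBERT can be fine-tuned as a single-output regressor for one affective dimension (valence or arousal) and scored with MAE and Pearson's r. It is not the shared-task submission code; the checkpoint name, the toy CVAT-style examples, and the single gradient step are assumptions for illustration.

```python
# Hedged sketch: MacBERT regression for dimensional sentiment analysis.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-macbert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "hfl/chinese-macbert-base", num_labels=1, problem_type="regression")

# Toy CVAT-style examples (text, valence rating); real training would iterate
# over the full CVAT/CVAS data with an optimizer.
texts, ratings = ["今天真是美好的一天", "這部電影太無聊了"], [7.5, 3.0]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor(ratings).unsqueeze(-1)

model.train()
loss = model(**batch, labels=labels).loss   # MSE loss for the regression head
loss.backward()                             # one illustrative update step

# Evaluation: mean absolute error and Pearson correlation coefficient.
model.eval()
with torch.no_grad():
    preds = model(**batch).logits.squeeze(-1).numpy()
gold = np.array(ratings)
mae = np.abs(preds - gold).mean()
r = np.corrcoef(preds, gold)[0, 1]
print(f"MAE={mae:.3f}  r={r:.3f}")
```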