An Element-aware Multi-representation Model for Law Article Prediction
Huilin Zhong | Junsheng Zhou | Weiguang Qu | Yunfei Long | Yanhui Gu
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Existing works have shown that using law articles as external knowledge can improve the performance of Legal Judgment Prediction. However, they do not fully exploit law article information, and most current work handles only single-label samples. In this paper, we propose a Law Article Element-aware Multi-representation Model (LEMM), which makes full use of law article information and supports multi-label samples. The model uses the labeled elements of law articles to extract fact description features from multiple angles, generating multiple representations of a fact for classification. Every label has a law-aware fact representation that encodes additional information. To capture the dependencies between law articles, the model also introduces a self-attention mechanism over these multiple representations. Compared with baseline models such as TopJudge, this model improves accuracy by 5.84%, macro F1 by 6.42%, and micro F1 by 4.28%.
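A minimal sketch of the general idea described in the abstract, not the authors' implementation: each law article attends over the fact description to produce its own law-aware fact representation, self-attention across these representations models dependencies between articles, and a per-article score supports multi-label prediction. All dimensions, module names, and the choice of PyTorch are assumptions.

```python
# Sketch only: hypothetical dimensions and module names, not LEMM's actual code.
import torch
import torch.nn as nn


class LawAwareMultiRepClassifier(nn.Module):
    def __init__(self, hidden_dim: int, num_articles: int, num_heads: int = 4):
        super().__init__()
        # One query vector per law article; in the paper these would be built
        # from the labeled article elements, here they are random for illustration.
        self.article_queries = nn.Parameter(torch.randn(num_articles, hidden_dim))
        # Self-attention across the per-article fact representations,
        # intended to capture dependencies between law articles.
        self.article_self_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # One score per article for multi-label classification.
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, fact_tokens: torch.Tensor) -> torch.Tensor:
        # fact_tokens: (batch, seq_len, hidden_dim) encoder output of the fact description.
        batch = fact_tokens.size(0)
        queries = self.article_queries.unsqueeze(0).expand(batch, -1, -1)
        # Each article query attends over the fact tokens, yielding one
        # law-aware representation of the fact per article (label).
        attn_scores = torch.einsum("bah,bth->bat", queries, fact_tokens)
        attn_weights = attn_scores.softmax(dim=-1)
        law_aware_reps = torch.einsum("bat,bth->bah", attn_weights, fact_tokens)
        # Self-attention over the stack of representations mixes information
        # across articles.
        mixed, _ = self.article_self_attn(law_aware_reps, law_aware_reps, law_aware_reps)
        # Independent score per article -> multi-label prediction.
        return self.scorer(mixed).squeeze(-1)  # (batch, num_articles)


# Usage sketch: plug in any encoder that yields (batch, seq_len, hidden_dim)
# states for the fact description, and train with BCEWithLogitsLoss.
model = LawAwareMultiRepClassifier(hidden_dim=256, num_articles=103)
fact = torch.randn(2, 128, 256)        # stand-in for encoder output
probs = torch.sigmoid(model(fact))     # per-article probabilities
```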