Fei Dong
2017
Attention-based Recurrent Convolutional Neural Network for Automatic Essay Scoring
Fei Dong | Yue Zhang | Jie Yang
Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)
Neural network models have recently been applied to the task of automatic essay scoring, giving promising results. Existing work used recurrent neural networks and convolutional neural networks to model input essays, giving grades based on a single vector representation of the essay. However, the relative advantages of RNNs and CNNs have not been compared. In addition, different parts of the essay can contribute differently to scoring, which is not captured by existing models. We address these issues by building a hierarchical sentence-document model to represent essays, using the attention mechanism to automatically decide the relative weights of words and sentences. Results show that our model outperforms previous state-of-the-art methods, demonstrating the effectiveness of the attention mechanism.
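A minimal sketch (in NumPy, not the authors' code) of the attention pooling the abstract describes: scalar weights are computed over word representations and used to form a single pooled vector, so the model can decide which words contribute most to the score. The names attention_pool, W, b, and v are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden_states, W, b, v):
    """hidden_states: (num_words, hidden_dim) from an RNN/CNN encoder."""
    # score each word: v . tanh(W h_i + b)
    scores = np.tanh(hidden_states @ W + b) @ v   # (num_words,)
    weights = softmax(scores)                     # relative word weights
    return weights @ hidden_states, weights       # pooled vector, weights

# toy usage: 4 words, hidden size 8
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
W, b, v = rng.normal(size=(8, 8)), np.zeros(8), rng.normal(size=8)
sent_vec, word_weights = attention_pool(H, W, b, v)
print(word_weights)  # which words the pooled representation attends to
```

The same pooling can be applied a second time over sentence vectors to obtain the document representation, which is the hierarchical structure the abstract refers to.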
Neural Reranking for Named Entity Recognition
Jie Yang | Yue Zhang | Fei Dong
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017
We propose a neural reranking system for named entity recognition (NER), which leverages recurrent neural network models to learn sentence-level patterns that involve named entity mentions. In particular, given an output sentence produced by a baseline NER model, we replace all entity mentions, such as Barack Obama, with their entity types, such as PER. The resulting sentence patterns contain direct output information, yet are less sparse because specific named entities are abstracted away. For example, “PER was born in LOC” can be such a pattern. LSTM and CNN structures are utilised for learning deep representations of such sentences for reranking. Results show that our system can significantly improve NER accuracies over two different baselines, giving the best reported results on a standard benchmark.
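A minimal sketch (hypothetical, not the paper's implementation) of the pattern-extraction step the abstract describes: entity mentions predicted by a baseline tagger are collapsed into their types, so that “Barack Obama was born in Hawaii” becomes the pattern “PER was born in LOC”. The to_pattern helper and the BIO tag scheme are assumptions for illustration.

```python
def to_pattern(tokens, tags):
    """tokens: list of words; tags: BIO tags from a baseline NER model."""
    pattern = []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            pattern.append(tag[2:])   # start of a mention: emit its entity type
        elif tag.startswith("I-"):
            continue                  # inside a mention: type already emitted
        else:
            pattern.append(token)     # ordinary word kept as-is
    return " ".join(pattern)

tokens = ["Barack", "Obama", "was", "born", "in", "Hawaii"]
tags = ["B-PER", "I-PER", "O", "O", "O", "B-LOC"]
print(to_pattern(tokens, tags))  # "PER was born in LOC"
```

Such patterns are what the LSTM and CNN rerankers would consume as input sentences.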
Neural Word Segmentation with Rich Pretraining
Jie Yang | Yue Zhang | Fei Dong
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Neural word segmentation research has benefited from large-scale raw texts by leveraging them for pretraining character and word embeddings. On the other hand, statistical segmentation research has exploited richer sources of external information, such as punctuation, automatic segmentation and POS tags. We investigate the effectiveness of a range of external training sources for neural word segmentation by building a modular segmentation model and pretraining its most important submodule using rich external sources. Results show that such pretraining significantly improves the model, leading to accuracies competitive with the best methods on six benchmarks.
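As an assumed illustration of the generic pretrain-and-reuse idea (not the paper's actual multi-source pretraining pipeline), the sketch below loads externally pretrained character embeddings and uses them to initialise an embedding table; the file format, helper names and dimensions are hypothetical.

```python
import numpy as np

def load_pretrained(path, dim):
    """Read lines of the form 'char v1 v2 ... vdim' into a dict of vectors."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split()
            if len(parts) == dim + 1:
                vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def build_embedding_matrix(vocab, pretrained, dim):
    """Random init, overwritten by pretrained vectors where available."""
    rng = np.random.default_rng(0)
    matrix = rng.uniform(-0.1, 0.1, size=(len(vocab), dim)).astype(np.float32)
    for i, ch in enumerate(vocab):
        if ch in pretrained:
            matrix[i] = pretrained[ch]
    return matrix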
2016
Automatic Features for Essay Scoring – An Empirical Study
Fei Dong | Yue Zhang
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing