<?xml version="1.0" encoding="UTF-8" ?>
<volume id="I17">
  <paper id="4000">
    <title>Proceedings of the IJCNLP 2017, Shared Tasks</title>
    <editor>Chao-Hong Liu</editor>
    <editor>Preslav Nakov</editor>
    <editor>Nianwen Xue</editor>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <url>http://www.aclweb.org/anthology/I17-4</url>
    <bibtype>book</bibtype>
    <bibkey>I17-4:2017</bibkey>
  </paper>

  <paper id="4001">
    <title>IJCNLP-2017 Task 1: Chinese Grammatical Error Diagnosis</title>
    <author><first>Gaoqi</first><last>Rao</last></author>
    <author><first>Baolin</first><last>Zhang</last></author>
    <author><first>Endong</first><last>Xun</last></author>
    <author><first>Lung-Hao</first><last>Lee</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>1&#8211;8</pages>
    <url>http://www.aclweb.org/anthology/I17-4001</url>
    <abstract>This paper presents the IJCNLP 2017 shared task for Chinese grammatical error
	diagnosis (CGED), which seeks to identify grammatical error types and their
	ranges of occurrence within sentences written by learners of Chinese as a
	foreign language. We describe the task definition, data preparation,
	performance metrics, and evaluation results. Of the 13 teams registered for
	this shared task, 5 teams developed systems and submitted a total of 13 runs.
	We expect this evaluation campaign to lead to the development of more advanced
	NLP techniques for educational applications, especially for Chinese error
	detection. All data sets with gold standards and scoring scripts are made
	publicly available to researchers.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>rao-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4002">
    <title>IJCNLP-2017 Task 2: Dimensional Sentiment Analysis for Chinese Phrases</title>
    <author><first>Liang-Chih</first><last>Yu</last></author>
    <author><first>Lung-Hao</first><last>Lee</last></author>
    <author><first>Jin</first><last>Wang</last></author>
    <author><first>Kam-Fai</first><last>Wong</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>9&#8211;16</pages>
    <url>http://www.aclweb.org/anthology/I17-4002</url>
    <abstract>This paper presents the IJCNLP 2017 shared task on Dimensional Sentiment
	Analysis for Chinese Phrases (DSAP), which seeks to identify real-valued
	sentiment scores of Chinese single words and multi-word phrases in both the
	valence and arousal dimensions. Valence represents the degree of pleasant and
	unpleasant (or positive and negative) feelings, and arousal represents the
	degree of excitement and calm. Of the 19 teams registered for this shared task
	on two-dimensional sentiment analysis, 13 submitted results. We expect this
	evaluation campaign to produce more advanced dimensional sentiment analysis
	techniques, especially for Chinese affective computing. All data sets with
	gold standards and scoring scripts are made publicly available to
	researchers.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>yu-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4003">
    <title>IJCNLP-2017 Task 3: Review Opinion Diversification (RevOpiD-2017)</title>
    <author><first>Anil</first><last>Kumar Singh</last></author>
    <author><first>Avijit</first><last>Thawani</last></author>
    <author><first>Mayank</first><last>Panchal</last></author>
    <author><first>Anubhav</first><last>Gupta</last></author>
    <author><first>Julian</first><last>McAuley</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>17&#8211;25</pages>
    <url>http://www.aclweb.org/anthology/I17-4003</url>
    <abstract>Unlike Entity Disambiguation in web search results, Opinion Disambiguation is
	a relatively unexplored topic. The RevOpiD shared task at IJCNLP-2017 aimed to
	attract attention to this research problem. In this paper, we summarize the
	first run of this task and introduce a new dataset that we have annotated for
	the purpose of evaluating Opinion Mining, Summarization, and Disambiguation
	methods.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>kumarsingh-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4004">
    <title>IJCNLP-2017 Task 4: Customer Feedback Analysis</title>
    <author><first>Chao-Hong</first><last>Liu</last></author>
    <author><first>Yasufumi</first><last>Moriya</last></author>
    <author><first>Alberto</first><last>Poncelas</last></author>
    <author><first>Declan</first><last>Groves</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>26&#8211;33</pages>
    <url>http://www.aclweb.org/anthology/I17-4004</url>
    <abstract>This document introduces the IJCNLP 2017 Shared Task on Customer Feedback
	Analysis. For this shared task we prepared corpora of customer feedback in
	four languages: English, French, Spanish, and Japanese. They were annotated
	with a common meaning categorization, which was improved from an
	ADAPT-Microsoft pivot study on customer feedback. Twenty teams participated
	in the shared task and twelve of them submitted prediction results. The
	results show that the performance of predicting the meanings of customer
	feedback is reasonably good in all four languages. Nine system description
	papers are archived in the shared task proceedings.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>liu-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4005">
    <title>IJCNLP-2017 Task 5: Multi-choice Question Answering in Examinations</title>
    <author><first>Shangmin</first><last>Guo</last></author>
    <author><first>Kang</first><last>Liu</last></author>
    <author><first>Shizhu</first><last>He</last></author>
    <author><first>Cao</first><last>Liu</last></author>
    <author><first>Jun</first><last>Zhao</last></author>
    <author><first>Zhuoyu</first><last>Wei</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>34&#8211;40</pages>
    <url>http://www.aclweb.org/anthology/I17-4005</url>
    <abstract>The IJCNLP-2017 Multi-choice Question Answering (MCQA) task aims at exploring
	the performance of current Question Answering (QA) techniques via real-world
	complex questions collected from Chinese Senior High School Entrance
	Examination papers and the CK12 website. The questions are all 4-way
	multi-choice questions, written in Chinese and English respectively, that
	cover a wide range of subjects, e.g., Biology, History, and Life Science. All
	questions are restricted to the elementary and middle school level. During
	the whole procedure of this task, 7 teams submitted 323 runs in total. This
	paper describes the collected data, the format and size of these questions,
	formal run statistics and results, and an overview and performance statistics
	of the different methods.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>guo-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4006">
    <title>Alibaba at IJCNLP-2017 Task 1: Embedding Grammatical Features into LSTMs for Chinese Grammatical Error Diagnosis Task</title>
    <author><first>Yi</first><last>Yang</last></author>
    <author><first>Pengjun</first><last>Xie</last></author>
    <author><first>Jun</first><last>Tao</last></author>
    <author><first>Guangwei</first><last>Xu</last></author>
    <author><first>Linlin</first><last>Li</last></author>
    <author><first>Si</first><last>Luo</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>41&#8211;46</pages>
    <url>http://www.aclweb.org/anthology/I17-4006</url>
    <abstract>This paper introduces the Alibaba NLP team's system for IJCNLP 2017 shared
	task No. 1, Chinese Grammatical Error Diagnosis (CGED). The task is to
	diagnose four types of grammatical errors: redundant words (R), missing words
	(M), bad word selection (S), and disordered words (W). We treat the task as a
	sequence tagging problem and design some hand-crafted features to solve it.
	Our system is mainly based on the LSTM-CRF model, and 3 ensemble strategies
	are applied to improve the performance. At the detection level and the
	identification level our system gets the highest F1 scores. At the position
	level, which is the most difficult level, we perform best on all
	metrics.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>yang-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4007">
    <title>THU_NGN at IJCNLP-2017 Task 2: Dimensional Sentiment Analysis for Chinese Phrases with Deep LSTM</title>
    <author><first>Chuhan</first><last>Wu</last></author>
    <author><first>Fangzhao</first><last>Wu</last></author>
    <author><first>Yongfeng</first><last>Huang</last></author>
    <author><first>Sixing</first><last>Wu</last></author>
    <author><first>Zhigang</first><last>Yuan</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>47&#8211;52</pages>
    <url>http://www.aclweb.org/anthology/I17-4007</url>
    <abstract>Predicting valence-arousal ratings for words and phrases is very useful for
	constructing affective resources for dimensional sentiment analysis. Since the
	existing valence-arousal resources for Chinese are mainly at the word level
	and phrase-level ones are lacking, the Dimensional Sentiment Analysis for
	Chinese Phrases (DSAP) task aims to predict the valence-arousal ratings of
	Chinese affective words and phrases automatically. For this task, we propose
	an approach using a densely connected LSTM network and word features to
	identify dimensional sentiment on valence and arousal for words and phrases
	jointly. We use word embeddings as the major feature and choose part of
	speech (POS) and word clusters as additional features to train the dense LSTM
	network. The evaluation results of our submissions (1st and 2nd in average
	performance) validate the effectiveness of our system in predicting valence
	and arousal dimensions for Chinese words and phrases.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>wu-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4008">
    <title>IIIT-H at IJCNLP-2017 Task 3: A Bidirectional-LSTM Approach for Review Opinion Diversification</title>
    <author><first>Pruthwik</first><last>Mishra</last></author>
    <author><first>Prathyusha</first><last>Danda</last></author>
    <author><first>Silpa</first><last>Kanneganti</last></author>
    <author><first>Soujanya</first><last>Lanka</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>53&#8211;58</pages>
    <url>http://www.aclweb.org/anthology/I17-4008</url>
    <abstract>The Review Opinion Diversification (RevOpiD-2017) shared task focuses on
	selecting the top-k reviews from a set of reviews for a particular product
	based on specific criteria. In this paper, we describe our approaches and
	results for modeling the ranking of reviews based on their usefulness scores,
	this being the first of the three subtasks under this shared task. Instead of
	posing this as a regression problem, we modeled it as a classification task
	where we identify whether a review is useful or not. We employed a
	bi-directional LSTM to represent each review, used with a softmax layer to
	predict the usefulness score. We chose the review with the highest usefulness
	score, then computed its cosine similarity with the rest of the reviews, in
	order to ensure diversity in the selection of the top-k reviews. We finished
	3rd on the top-5 list prediction and 2nd on the top-10 list prediction in the
	shared task. We discuss the model and the results in detail in the
	paper.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>mishra-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4009">
    <title>Bingo at IJCNLP-2017 Task 4: Augmenting Data using Machine Translation for Cross-linguistic Customer Feedback Classification</title>
    <author><first>Heba</first><last>Elfardy</last></author>
    <author><first>Manisha</first><last>Srivastava</last></author>
    <author><first>Wei</first><last>Xiao</last></author>
    <author><first>Jared</first><last>Kramer</last></author>
    <author><first>Tarun</first><last>Agarwal</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>59&#8211;66</pages>
    <url>http://www.aclweb.org/anthology/I17-4009</url>
    <abstract>The ability to automatically and accurately process customer feedback is a
	necessity in the private sector. Unfortunately, customer feedback can be one of
	the most difficult types of data to work with due to the sheer volume and
	variety of services, products, languages, and cultures that comprise the
	customer experience. In order to address this issue, our team built a suite of
	classifiers trained on a four-language, multi-label corpus released as part of
	the shared task on "Customer Feedback Analysis" at IJCNLP 2017. In addition to
	standard text preprocessing, we translated each dataset into each other
	language to increase the size of the training datasets. Additionally, we
	used word embeddings in our feature engineering step. Ultimately, we trained
	classifiers using Logistic Regression, Random Forest, and Long Short-Term
	Memory (LSTM) Recurrent Neural Networks. Overall, we achieved a Macro-Average
	F-score between 48.7% and 56.0% for the four languages and ranked 3/12 for
	English, 3/7 for Spanish, 1/8 for French, and 2/7 for Japanese.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>elfardy-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4010">
    <title>ADAPT Centre Cone Team at IJCNLP-2017 Task 5: A Similarity-Based Logistic Regression Approach to Multi-choice Question Answering in an Examinations Shared Task</title>
    <author><first>Daria</first><last>Dzendzik</last></author>
    <author><first>Alberto</first><last>Poncelas</last></author>
    <author><first>Carl</first><last>Vogel</last></author>
    <author><first>Qun</first><last>Liu</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>67&#8211;72</pages>
    <url>http://www.aclweb.org/anthology/I17-4010</url>
    <abstract>We describe the work of a team from the ADAPT Centre in Ireland in addressing
	automatic answer selection for the Multi-choice Question Answering in
	Examinations shared task. The system is based on a logistic regression over
	the string similarities between question, answer, and additional text. We
	obtain the highest grade out of six systems: 48.7% accuracy on a validation
	set (vs. a baseline of 29.45%) and 45.6% on a test set.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>dzendzik-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4011">
    <title>YNU-HPCC at IJCNLP-2017 Task 1: Chinese Grammatical Error Diagnosis Using a Bi-directional LSTM-CRF Model</title>
    <author><first>Quanlei</first><last>Liao</last></author>
    <author><first>Jin</first><last>Wang</last></author>
    <author><first>Jinnan</first><last>Yang</last></author>
    <author><first>Xuejie</first><last>Zhang</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>73&#8211;77</pages>
    <url>http://www.aclweb.org/anthology/I17-4011</url>
    <abstract>Building a system to detect Chinese grammatical errors is a challenge for
	natural-language processing researchers. As the number of Chinese learners
	increases, developing such a system can help them study Chinese more easily.
	This paper
	introduces a bi-directional long short-term memory (BiLSTM) - conditional
	random field (CRF) model to produce the sequences that indicate an error type
	for every position of a sentence, since we regard Chinese grammatical error
	diagnosis (CGED) as a sequence-labeling problem.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>liao-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4012">
    <title>CVTE at IJCNLP-2017 Task 1: Character Checking System for Chinese Grammatical Error Diagnosis Task</title>
    <author><first>Xian</first><last>Li</last></author>
    <author><first>Peng</first><last>Wang</last></author>
    <author><first>Suixue</first><last>Wang</last></author>
    <author><first>Guanyu</first><last>Jiang</last></author>
    <author><first>Tianyuan</first><last>You</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>78&#8211;83</pages>
    <url>http://www.aclweb.org/anthology/I17-4012</url>
    <abstract>Grammatical error diagnosis is an important task in natural language
	processing. This paper introduces the CVTE Character Checking System in the
	NLP-TEA-4 shared task for CGED 2017. We use a Bi-LSTM to generate the
	probability of every character, then take two kinds of strategies to decide
	whether a character is correct or not. This system is probably more suitable
	for dealing with the error type of bad word selection, which is one of four
	types of errors; the rest are word redundancy, word missing, and word
	disorder. Finally, the second strategy achieves a better F1 score than the
	first one at all of the detection, identification, and position
	levels.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>li-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4013">
    <title>LDCCNLP at IJCNLP-2017 Task 2: Dimensional Sentiment Analysis for Chinese Phrases Using Machine Learning</title>
    <author><first>Peng</first><last>Zhong</last></author>
    <author><first>Jingbin</first><last>Wang</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>84&#8211;88</pages>
    <url>http://www.aclweb.org/anthology/I17-4013</url>
    <abstract>Sentiment analysis on Chinese text has been intensively studied. The basic
	task for related research is to construct an affective lexicon and thereby
	predict emotional scores of different levels. However, finite lexicon
	resources make it difficult to effectively and automatically distinguish
	between various types of sentiment information in Chinese texts. The
	IJCNLP-2017 Task 2 competition seeks to automatically calculate Valence and
	Arousal ratings within the hierarchies of vocabulary and phrases in Chinese.
	We introduce a regression methodology to automatically recognize continuous
	emotional values, and incorporate a word embedding technique. In our system,
	the MAE predictive values of Valence and Arousal were 0.811 and 0.996,
	respectively, for the sentiment dimension prediction of words in Chinese. In
	phrase prediction, the corresponding results were 0.822 and 0.489, ranking
	sixth among all teams.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>zhong-wang:2017:I17-4</bibkey>
  </paper>

  <paper id="4014">
    <title>CKIP at IJCNLP-2017 Task 2: Neural Valence-Arousal Prediction for Phrases</title>
    <author><first>Peng-Hsuan</first><last>Li</last></author>
    <author><first>Wei-Yun</first><last>Ma</last></author>
    <author><first>Hsin-Yang</first><last>Wang</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>89&#8211;94</pages>
    <url>http://www.aclweb.org/anthology/I17-4014</url>
    <abstract>CKIP takes part in solving the Dimensional Sentiment Analysis for Chinese
	Phrases (DSAP) shared task of IJCNLP 2017. This task calls for systems that can
	predict the valence and the arousal of Chinese phrases, which are real values
	between 1 and 9. To achieve this, functions mapping Chinese character sequences
	to real numbers are built by regression techniques. In addition, the CKIP
	phrase Valence-Arousal (VA) predictor depends on knowledge of modifier words
	and head words. This includes the types of known modifier words, VA of head
	words, and distributional semantics of both these words. The predictor took the
	second place out of 13 teams on phrase VA prediction, with 0.444 MAE and 0.935
	PCC on valence, and 0.395 MAE and 0.904 PCC on arousal.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>li-ma-wang:2017:I17-4</bibkey>
  </paper>

  <paper id="4015">
    <title>CIAL at IJCNLP-2017 Task 2: An Ensemble Valence-Arousal Analysis System for Chinese Words and Phrases</title>
    <author><first>Zheng-Wen</first><last>Lin</last></author>
    <author><first>Yung-Chun</first><last>Chang</last></author>
    <author><first>Chen-Ann</first><last>Wang</last></author>
    <author><first>Yu-Lun</first><last>Hsieh</last></author>
    <author><first>Wen-Lian</first><last>Hsu</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>95&#8211;99</pages>
    <url>http://www.aclweb.org/anthology/I17-4015</url>
    <abstract>A sentiment lexicon is very helpful in dimensional sentiment applications.
	Because the Chinese vocabulary is essentially unbounded, a method to predict
	ratings for unseen Chinese words is required. The proposed method can handle
	both words and phrases by using an ADVWeight List for word prediction, which
	in turn improves our performance at the phrase level. The evaluation results
	demonstrate that our system is effective in dimensional sentiment analysis
	for Chinese phrases. The Mean Absolute Error (MAE) and Pearson's Correlation
	Coefficient (PCC) for Valence are 0.723 and 0.835, respectively, and those
	for Arousal are 0.914 and 0.756, respectively.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>lin-EtAl:2017:I17-41</bibkey>
  </paper>

  <paper id="4016">
    <title>Alibaba at IJCNLP-2017 Task 2: A Boosted Deep System for Dimensional Sentiment Analysis of Chinese Phrases</title>
    <author><first>Xin</first><last>Zhou</last></author>
    <author><first>Jian</first><last>Wang</last></author>
    <author><first>Xu</first><last>Xie</last></author>
    <author><first>Changlong</first><last>Sun</last></author>
    <author><first>Luo</first><last>Si</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>100&#8211;104</pages>
    <url>http://www.aclweb.org/anthology/I17-4016</url>
    <abstract>This paper introduces Team Alibaba's systems participating in IJCNLP 2017
	shared task No. 2, Dimensional Sentiment Analysis for Chinese Phrases (DSAP).
	The systems mainly utilize multi-layer neural networks with multiple input
	features such as word embeddings, part-of-speech tagging (POST), word
	clustering, prefix type, character embeddings, and cross-sentiment input,
	with the AdaBoost method for model training. For the word-level task our best
	run achieved MAE 0.545 (ranked 2nd) and PCC 0.892 (ranked 2nd) in valence
	prediction, and MAE 0.857 (ranked 1st) and PCC 0.678 (ranked 2nd) in arousal
	prediction. For the average performance of the word and phrase tasks we
	achieved MAE 0.5355 (ranked 3rd) and PCC 0.8965 (ranked 3rd) in valence
	prediction, and MAE 0.661 (ranked 3rd) and PCC 0.766 (ranked 2nd) in arousal
	prediction. In the end our submitted system achieved 2nd place in mean
	rank.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>zhou-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4017">
    <title>NLPSA at IJCNLP-2017 Task 2: Imagine Scenario: Leveraging Supportive Images for Dimensional Sentiment Analysis</title>
    <author><first>Szu-Min</first><last>Chen</last></author>
    <author><first>Zi-Yuan</first><last>Chen</last></author>
    <author><first>Lun-Wei</first><last>Ku</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>105&#8211;111</pages>
    <url>http://www.aclweb.org/anthology/I17-4017</url>
    <abstract>Categorical sentiment classification has drawn much attention in the field of
	NLP, while less work has been conducted on dimensional sentiment analysis
	(DSA). Recent works on DSA utilize either word embeddings, knowledge base
	features, or bilingual language resources. In this paper, we propose our
	model for the IJCNLP 2017 Dimensional Sentiment Analysis for Chinese Phrases
	shared task. Our model incorporates word embeddings as well as image
	features, attempting to simulate humans' imagining behavior in sentiment
	analysis. Though its performance is ultimately not comparable to that of
	other systems, we conduct several experiments, discuss possible reasons, and
	analyze the drawbacks of our model.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>chen-chen-ku:2017:I17-4</bibkey>
  </paper>

  <paper id="4018">
    <title>NCYU at IJCNLP-2017 Task 2: Dimensional Sentiment Analysis for Chinese Phrases using Vector Representations</title>
    <author><first>Jui-Feng</first><last>Yeh</last></author>
    <author><first>Jian-Cheng</first><last>Tsai</last></author>
    <author><first>Bo-Wei</first><last>Wu</last></author>
    <author><first>Tai-You</first><last>Kuang</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>112&#8211;117</pages>
    <url>http://www.aclweb.org/anthology/I17-4018</url>
    <abstract>This paper presents two vector representations proposed by National Chiayi
	University (NCYU) for phrase-based sentiment detection, which were used to
	compete in Dimensional Sentiment Analysis for Chinese Phrases (DSAP) at
	IJCNLP 2017. Vector-based sentiment phrase-like unit analysis models are
	proposed in this article. E-HowNet-based clustering is first used to obtain
	the valence and arousal values of sentiment words. An out-of-vocabulary
	function is also defined in this article to measure the dimensional emotion
	values of unknown words. For predicting the corresponding values of a
	sentiment phrase-like unit, a vector-based approach is proposed here.
	According to the experimental results, the proposed approach is
	efficacious.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>yeh-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4019">
    <title>MainiwayAI at IJCNLP-2017 Task 2: Ensembles of Deep Architectures for Valence-Arousal Prediction</title>
    <author><first>Yassine</first><last>Benajiba</last></author>
    <author><first>Jin</first><last>Sun</last></author>
    <author><first>Yong</first><last>Zhang</last></author>
    <author><first>Zhiliang</first><last>Weng</last></author>
    <author><first>Or</first><last>Biran</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>118&#8211;123</pages>
    <url>http://www.aclweb.org/anthology/I17-4019</url>
    <abstract>This paper introduces Mainiway AI Lab's submitted system for the IJCNLP 2017
	shared task on Dimensional Sentiment Analysis of Chinese Phrases (DSAP), and
	related experiments. Our approach consists of deep neural networks with
	various architectures, and our best system is a voted ensemble of networks.
	We achieve a Mean Absolute Error of 0.64 in valence prediction and 0.68 in
	arousal prediction on the test set, both placing us as the 5th ranked team in
	the competition.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>benajiba-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4020">
    <title>NCTU-NTUT at IJCNLP-2017 Task 2: Deep Phrase Embedding using bi-LSTMs for Valence-Arousal Ratings Prediction of Chinese Phrases</title>
    <author><first>Yen-Hsuan</first><last>Lee</last></author>
    <author><first>Han-Yun</first><last>Yeh</last></author>
    <author><first>Yih-Ru</first><last>Wang</last></author>
    <author><first>Yuan-Fu</first><last>Liao</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>124&#8211;129</pages>
    <url>http://www.aclweb.org/anthology/I17-4020</url>
    <abstract>In this paper, a deep phrase embedding approach using bi-directional long
	short-term memory (Bi-LSTM) is proposed to predict the valence-arousal
	ratings of Chinese words and phrases. It adopts a Chinese word segmentation
	frontend, local order-aware word and global phrase embedding representations,
	and a deep regression neural network (DRNN) model. The performance of the
	proposed method was benchmarked in IJCNLP 2017 shared task 2. According to
	the official evaluation results, our best system achieved a mean rank of 6.5
	among all 24 submissions.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>lee-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4021">
    <title>NTOUA at IJCNLP-2017 Task 2: Predicting Sentiment Scores of Chinese Words and Phrases</title>
    <author><first>Chuan-Jie</first><last>Lin</last></author>
    <author><first>Hao-Tsung</first><last>Chang</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>130&#8211;133</pages>
    <url>http://www.aclweb.org/anthology/I17-4021</url>
    <abstract>This paper describes the approaches to sentiment score prediction used in the
	NTOU DSA system participating in DSAP this year. The modules that predict
	scores for words are adapted from our system from last year. The approach to
	predicting scores for phrases is a keyword-based machine learning method. Our
	system performs well in predicting the scores of phrases.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>lin-chang:2017:I17-4</bibkey>
  </paper>

  <paper id="4022">
    <title>CYUT at IJCNLP-2017 Task 3: System Report for Review Opinion Diversification</title>
    <author><first>Shih-Hung</first><last>Wu</last></author>
    <author><first>Su-Yu</first><last>Chang</last></author>
    <author><first>Liang-Pu</first><last>Chen</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>134&#8211;137</pages>
    <url>http://www.aclweb.org/anthology/I17-4022</url>
    <abstract>Review Opinion Diversification (RevOpiD) 2017 is a shared task held at the
	International Joint Conference on Natural Language Processing (IJCNLP). The
	shared task aims at selecting the top-k reviews, as a summary, from a set of
	reviews. There are three subtasks in RevOpiD: helpfulness ranking,
	representativeness ranking, and exhaustive coverage ranking. This year, our
	team submitted runs from three models. We focus on ranking reviews based on
	their helpfulness. In the first two models, we use linear regression with two
	different loss functions: the first is least squares, and the second is cross
	entropy. The third run is a random baseline. For both k=5 and k=10, our second
	model obtains the best scores under the official evaluation metrics.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>wu-chang-chen:2017:I17-4</bibkey>
  </paper>

  <paper id="4023">
    <title>JUNLP at IJCNLP-2017 Task 3: A Rank Prediction Model for Review Opinion Diversification</title>
    <author><first>Monalisa</first><last>Dey</last></author>
    <author><first>Anupam</first><last>Mondal</last></author>
    <author><first>Dipankar</first><last>Das</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>138&#8211;142</pages>
    <url>http://www.aclweb.org/anthology/I17-4023</url>
    <abstract>The IJCNLP-17 Review Opinion Diversification (RevOpiD-2017) task has been
	designed for ranking the top-k reviews of a product from a set of reviews,
	which assists in identifying a summarized output that expresses the opinion of
	the entire review set. The task is divided into three independent subtasks:
	subtask-A, subtask-B, and subtask-C. Each of these three subtasks selects the
	top-k reviews based on helpfulness, representativeness, and exhaustiveness of
	the opinions expressed in the review set, respectively. In order to develop the
	modules and predict the rank of reviews for all three subtasks, we have
	employed two well-known supervised classifiers, namely Naïve Bayes and
	Logistic Regression, on top of several extracted features, such as the number
	of nouns, number of verbs, and number of sentiment words, from the provided
	datasets. Finally, the organizers have helped to validate the predicted outputs
	for all three subtasks by using their evaluation metrics. The metrics provide
	the scores of list size 5 as (0.80 (mth)) for subtask-A, (0.86 (cos), 0.87 (cos
	d), 0.71 (cpr), 4.98 (a-dcg), and 556.94 (wt)) for subtask B, and (10.94 (unwt)
	and 0.67 (recall)) for subtask C individually.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>dey-mondal-das:2017:I17-4</bibkey>
  </paper>

  <paper id="4024">
    <title>All-In-1 at IJCNLP-2017 Task 4: Short Text Classification with One Model for All Languages</title>
    <author><first>Barbara</first><last>Plank</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>143&#8211;148</pages>
    <url>http://www.aclweb.org/anthology/I17-4024</url>
    <abstract>We present All-In-1, a simple model for multilingual text classification that
	does not require any parallel data. It is based on a traditional Support Vector
	Machine classifier exploiting multilingual word embeddings and character
	n-grams. Our model is simple, easily extendable yet very effective, overall
	ranking 1st (out of 12 teams) in the IJCNLP 2017 shared task on customer
	feedback analysis in four languages: English, French, Japanese and Spanish.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>plank:2017:I17-4</bibkey>
  </paper>

  <paper id="4025">
    <title>SentiNLP at IJCNLP-2017 Task 4: Customer Feedback Analysis Using a Bi-LSTM-CNN Model</title>
    <author><first>Shuying</first><last>Lin</last></author>
    <author><first>Huosheng</first><last>Xie</last></author>
    <author><first>Liang-Chih</first><last>Yu</last></author>
    <author><first>K. Robert</first><last>Lai</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>149&#8211;154</pages>
    <url>http://www.aclweb.org/anthology/I17-4025</url>
    <abstract>The analysis of customer feedback is useful for providing good customer
	service. A large amount of online customer feedback is produced, and manual
	classification is impractical because of the high volume of data. Therefore,
	automatic classification of customer feedback is important for an analysis
	system to identify the meanings or intentions that customers express. The aim
	of shared Task 4 of IJCNLP 2017 is to classify customer feedback into six tag
	categories. In this paper, we present a system that uses word embeddings to
	represent the features of the sentences in the corpus and a neural network as
	the classifier to complete the shared task. An ensemble method is then used to
	obtain the final prediction. The proposed method ranked first among twelve
	teams in terms of micro-averaged F1 and second for the accuracy metric.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>lin-EtAl:2017:I17-42</bibkey>
  </paper>

  <paper id="4026">
    <title>IIIT-H at IJCNLP-2017 Task 4: Customer Feedback Analysis using Machine Learning and Neural Network Approaches</title>
    <author><first>Prathyusha</first><last>Danda</last></author>
    <author><first>Pruthwik</first><last>Mishra</last></author>
    <author><first>Silpa</first><last>Kanneganti</last></author>
    <author><first>Soujanya</first><last>Lanka</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>155&#8211;160</pages>
    <url>http://www.aclweb.org/anthology/I17-4026</url>
    <abstract>The IJCNLP 2017 shared task on Customer Feedback Analysis focuses on
	classifying customer feedback into one of a predefined set of categories or
	classes. In this paper, we describe our approach to this problem and the
	results on four languages, i.e., English, French, Japanese and Spanish. Our
	system implemented a bidirectional LSTM using pre-trained GloVe and fastText
	embeddings, and an SVM with TF-IDF vectors, for classifying the feedback data,
	as described in the later sections. We also tried different machine learning
	techniques and compared the results in this paper. Out of the 12 participating
	teams, our systems obtained exact accuracy scores of 0.65, 0.86, 0.70
	and 0.56 in English, Spanish, French and Japanese,
	respectively. We observed that our systems perform better than the baseline
	systems in three languages, while we match the baseline accuracy for Japanese
	on our submitted systems. We noticed significant improvements in Japanese in
	later experiments, matching the highest performing system that was submitted in
	the shared task, which we will discuss in this paper.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>danda-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4027">
    <title>ADAPT at IJCNLP-2017 Task 4: A Multinomial Naive Bayes Classification Approach for Customer Feedback Analysis task</title>
    <author><first>Pintu</first><last>Lohar</last></author>
    <author><first>Koel</first><last>Dutta Chowdhury</last></author>
    <author><first>Haithem</first><last>Afli</last></author>
    <author><first>Mohammed</first><last>Hasanuzzaman</last></author>
    <author><first>Andy</first><last>Way</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>161&#8211;169</pages>
    <url>http://www.aclweb.org/anthology/I17-4027</url>
    <abstract>In this age of the digital economy, organisations try their best to engage
	customers in the feedback provisioning process. With the
	assistance of customer insights, an organisation can develop a better product
	and provide a better service to its customers. In this paper, we analyse
	real-world samples of customer feedback from Microsoft Office customers in four
	languages, i.e., English, French, Spanish and Japanese, and conclude a
	five-plus-one-classes categorisation (comment, request, bug, complaint,
	meaningless and undetermined) for meaning classification. The task is to
	access multilingual corpora annotated by the proposed meaning categorisation
	scheme and develop a system to
	determine what class(es) the customer feedback sentences should be annotated as
	in four languages. We propose the following approaches to accomplish this task: 
	 (i) a multinomial naive Bayes (MNB) approach for multi-label classification, 
	 (ii) MNB with a one-vs-rest classifier approach, and 
	 (iii) a combination of the multi-label classification-based and the sentiment
	classification-based approaches. 
	Our best system produces F-scores of 0.67, 0.83, 0.72 and 0.7 for English,
	Spanish, French and Japanese, respectively. The results are competitive with
	the best ones for all languages and secure the 3rd and 5th positions for
	Japanese and French, respectively, among all submitted systems.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>lohar-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4028">
    <title>OhioState at IJCNLP-2017 Task 4: Exploring Neural Architectures for Multilingual Customer Feedback Analysis</title>
    <author><first>Dushyanta</first><last>Dhyani</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>170&#8211;173</pages>
    <url>http://www.aclweb.org/anthology/I17-4028</url>
    <abstract>This paper describes our systems for the IJCNLP 2017 Shared Task on Customer
	Feedback Analysis. We experimented with simple neural architectures that gave
	competitive performance on certain tasks. These include shallow CNN and
	Bi-Directional LSTM architectures, with Facebook's fastText as a baseline
	model. Our best performing model was in the top 5 systems by the
	Exact-Accuracy and Micro-Average-F1 metrics for the Spanish (85.28% for both)
	and French (70% and 73.17%, respectively) tasks, and outperformed all the other
	models on the comment (87.28%) and meaningless (51.85%) tags using the Micro
	Average F1 by Tags metric for the French task.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>dhyani:2017:I17-4</bibkey>
  </paper>

  <paper id="4029">
    <title>YNU-HPCC at IJCNLP-2017 Task 4: Attention-based Bi-directional GRU Model for Customer Feedback Analysis Task of English</title>
    <author><first>Nan</first><last>Wang</last></author>
    <author><first>Jin</first><last>Wang</last></author>
    <author><first>Xuejie</first><last>Zhang</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>174&#8211;179</pages>
    <url>http://www.aclweb.org/anthology/I17-4029</url>
    <abstract>This paper describes our submission to IJCNLP 2017 shared task 4, for
	predicting the tags of unseen customer feedback sentences, such as comments,
	complaints, bugs, requests, and meaningless and undetermined statements. With
	the rise of neural networks, a large number of deep learning methods have been
	developed that perform very well on text classification. Our ensemble
	classification model is based on a bi-directional gated recurrent unit and an
	attention mechanism, which shows a 3.8% improvement in classification accuracy.
	To enhance the model performance, we also compared it with several
	word-embedding models. The comparative results show that a combination of both
	word2vec and GloVe achieves the best performance.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>wang-wang-zhang:2017:I17-4</bibkey>
  </paper>

  <paper id="4030">
    <title>NITMZ-JU at IJCNLP-2017 Task 4: Customer Feedback Analysis</title>
    <author><first>Somnath</first><last>Banerjee</last></author>
    <author><first>Partha</first><last>Pakray</last></author>
    <author><first>Riyanka</first><last>Manna</last></author>
    <author><first>Dipankar</first><last>Das</last></author>
    <author><first>Alexander</first><last>Gelbukh</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>180&#8211;183</pages>
    <url>http://www.aclweb.org/anthology/I17-4030</url>
    <abstract>In this paper, we describe a deep learning framework for analyzing the customer
	feedback as part of our participation in the
	shared task on Customer Feedback Analysis at the 8th International Joint
	Conference on Natural Language Processing
	(IJCNLP 2017). A Convolutional Neural Network (CNN) based deep neural network
	model was employed for the customer feedback task. The proposed system
	was evaluated on two languages, namely,
	English and French.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>banerjee-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4031">
    <title>IITP at IJCNLP-2017 Task 4: Auto Analysis of Customer Feedback using CNN and GRU Network</title>
    <author><first>Deepak</first><last>Gupta</last></author>
    <author><first>Pabitra</first><last>Lenka</last></author>
    <author><first>Harsimran</first><last>Bedi</last></author>
    <author><first>Asif</first><last>Ekbal</last></author>
    <author><first>Pushpak</first><last>Bhattacharyya</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>184&#8211;193</pages>
    <url>http://www.aclweb.org/anthology/I17-4031</url>
    <abstract>Analyzing customer feedback is the best way to channelize the data into new
	marketing strategies that benefit entrepreneurs as well as customers.
	Therefore, an automated system which can analyze customer behavior is in great
	demand. Users may write feedback in any language, and hence mining appropriate
	information often becomes intractable. Especially in a traditional
	feature-based supervised model, it is difficult to build a generic system, as
	one has to understand the concerned language to find the relevant features. In
	order to overcome this, we propose deep Convolutional Neural Network (CNN) and
	Recurrent Neural Network (RNN) based approaches that do not require
	handcrafting of features. We evaluate these techniques for analyzing customer
	feedback sentences in four languages, namely English, French, Japanese and
	Spanish. Our empirical analysis shows that our models perform well in all four
	languages on the setups of the IJCNLP Shared Task on Customer Feedback
	Analysis. Our model achieved the second rank in French, with an accuracy of
	71.75%, and the third rank in all the other languages.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>gupta-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4032">
    <title>YNUDLG at IJCNLP-2017 Task 5: A CNN-LSTM Model with Attention for Multi-choice Question Answering in Examinations</title>
    <author><first>Min</first><last>Wang</last></author>
    <author><first>Qingxun</first><last>Liu</last></author>
    <author><first>Peng</first><last>Ding</last></author>
    <author><first>Yongbin</first><last>Li</last></author>
    <author><first>Xiaobing</first><last>Zhou</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>194&#8211;198</pages>
    <url>http://www.aclweb.org/anthology/I17-4032</url>
    <abstract>In this paper, we first use convolutional neural networks (CNNs) to learn the
	joint representations of question-answer pairs, then use the joint
	representations as the inputs of a long short-term memory (LSTM) network with
	attention to learn the answer sequence of a question for labeling the matching
	quality of each answer. We also incorporate external knowledge by training
	Word2Vec on Flashcards data, thus obtaining more compact embeddings.
	Experimental results show that our method achieves better or comparable
	performance compared with the baseline system. The proposed approach achieves
	accuracies of 0.39 and 0.42 on the English validation and test sets,
	respectively.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>wang-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4033">
    <title>ALS at IJCNLP-2017 Task 5: Answer Localization System for Multi-Choice Question Answering in Exams</title>
    <author><first>Changliang</first><last>Li</last></author>
    <author><first>Cunliang</first><last>Kong</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>199&#8211;202</pages>
    <url>http://www.aclweb.org/anthology/I17-4033</url>
    <abstract>Multi-choice question answering in exams is a typical QA task. To accomplish
	this task, we present an answer localization method to locate answers shown in
	web pages, considering both structural and semantic information. Using this
	method as a basis, we analyze sentences and paragraphs appearing on web pages
	to get predictions. With this answer localization system, we get effective
	results on both the validation dataset and the test dataset.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>li-kong:2017:I17-4</bibkey>
  </paper>

  <paper id="4034">
    <title>MappSent at IJCNLP-2017 Task 5: A Textual Similarity Approach Applied to Multi-choice Question Answering in Examinations</title>
    <author><first>Amir</first><last>Hazem</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>203&#8211;207</pages>
    <url>http://www.aclweb.org/anthology/I17-4034</url>
    <abstract>In this paper we present MappSent, a textual similarity approach that we
	applied to the multi-choice question answering in exams shared task. MappSent
	was initially proposed for question-to-question similarity (Hazem et al.,
	2017). In this work, we present the results of two adaptations of MappSent for
	the question answering task on the English dataset.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>hazem:2017:I17-4</bibkey>
  </paper>

  <paper id="4035">
    <title>YNU-HPCC at IJCNLP-2017 Task 5: Multi-choice Question Answering in Exams Using an Attention-based LSTM Model</title>
    <author><first>Hang</first><last>Yuan</last></author>
    <author><first>You</first><last>Zhang</last></author>
    <author><first>Jin</first><last>Wang</last></author>
    <author><first>Xuejie</first><last>Zhang</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>208&#8211;212</pages>
    <url>http://www.aclweb.org/anthology/I17-4035</url>
    <abstract>This shared task is a typical question answering task that aims to test how
	accurately the participants can answer questions from exams. Typically, for
	each question, there are four candidate answers, and only one of the answers is
	correct. The existing methods for such a task usually implement a recurrent
	neural network (RNN) or long short-term memory (LSTM). However, both RNN and
	LSTM are biased models in which the words at the tail of a sentence are more
	dominant than the words at the head. In this paper, we propose the use of an
	attention-based LSTM (AT-LSTM) model for these tasks. By adding an attention
	mechanism to the standard LSTM, this model can more easily capture long
	contextual information.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>yuan-EtAl:2017:I17-4</bibkey>
  </paper>

  <paper id="4036">
    <title>JU NITM at IJCNLP-2017 Task 5: A Classification Approach for Answer Selection in Multi-choice Question Answering System</title>
    <author><first>Sandip</first><last>Sarkar</last></author>
    <author><first>Dipankar</first><last>Das</last></author>
    <author><first>Partha</first><last>Pakray</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Shared Tasks</booktitle>
    <month>December</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>213&#8211;216</pages>
    <url>http://www.aclweb.org/anthology/I17-4036</url>
    <abstract>This paper describes the participation of the JU NITM team in IJCNLP-2017 Task
	5: "Multi-choice Question Answering in Examinations". The main aim of this
	shared task is to choose the correct option for each multi-choice question. Our
	proposed model uses vector representations as features and machine learning
	for classification. We first represent each question and answer in vector space
	and then find the cosine similarity between those two vectors. Finally, we
	apply a classification approach to find the correct answer. Our system was
	developed only for the English language, and it obtained an accuracy of 40.07%
	on the test dataset and 40.06% on the validation dataset.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>sarkar-das-pakray:2017:I17-4</bibkey>
  </paper>

</volume>

