<?xml version="1.0" encoding="UTF-8" ?>
<volume id="I17">
  <paper id="5000">
    <title>Proceedings of the IJCNLP 2017, Tutorial Abstracts</title>
    <editor>Sadao Kurohashi</editor>
    <editor>Michael Strube</editor>
    <month>November</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <url>http://www.aclweb.org/anthology/I17-5</url>
    <bibtype>book</bibtype>
    <bibkey>I17-5:2017</bibkey>
  </paper>

  <paper id="5001">
    <title>Deep Learning in Lexical Analysis and Parsing</title>
    <author><first>Wanxiang</first><last>Che</last></author>
    <author><first>Yue</first><last>Zhang</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Tutorial Abstracts</booktitle>
    <month>November</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>1&#8211;2</pages>
    <url>http://www.aclweb.org/anthology/I17-5001</url>
    <abstract>Neural networks, also known by the fancier name deep learning, can overcome
	the "feature engineering" problem. In theory, they can use non-linear
	activation functions and multiple layers to automatically find useful
	features. Novel network structures, such as convolutional or recurrent ones,
	help to reduce the difficulty further.

	These deep learning models have been successfully applied to lexical analysis
	and parsing. In this tutorial, we give a review of each line of work,
	contrasting it with traditional statistical methods and organizing the
	material in a consistent order.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>che-zhang:2017:I17-5</bibkey>
  </paper>

  <paper id="5002">
    <title>Multilingual Vector Representations of Words, Sentences, and Documents</title>
    <author><first>Gerard</first><last>de Melo</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Tutorial Abstracts</booktitle>
    <month>November</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>3&#8211;5</pages>
    <url>http://www.aclweb.org/anthology/I17-5002</url>
    <abstract>Neural vector representations are now ubiquitous in all subfields of natural
	language processing and text mining. While methods such as word2vec and GloVe
	are well-known, this tutorial focuses on multilingual and cross-lingual vector
	representations, not only of words but also of sentences and documents.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>demelo:2017:I17-5</bibkey>
  </paper>

  <paper id="5003">
    <title>Open-Domain Neural Dialogue Systems</title>
    <author><first>Yun-Nung</first><last>Chen</last></author>
    <author><first>Jianfeng</first><last>Gao</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Tutorial Abstracts</booktitle>
    <month>November</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>6&#8211;10</pages>
    <url>http://www.aclweb.org/anthology/I17-5003</url>
    <abstract>Over the past decade, spoken dialogue systems have become the most prominent
	component of today's personal assistants.
	Many devices incorporate dialogue system modules, which allow users to speak
	naturally in order to finish tasks more efficiently.
	Traditional conversational systems have rather complex and/or modular
	pipelines.
	The advance of deep learning technologies has recently given rise to
	applications of neural models to dialogue modeling.
	Nevertheless, applying deep learning technologies to building robust and
	scalable dialogue systems remains a challenging task and an open research
	area, as it requires a deeper understanding of the classic pipelines as well
	as detailed knowledge of how models from prior work and recent
	state-of-the-art work are benchmarked.
	This tutorial therefore gives an overview of dialogue system development,
	describing the most recent research on building task-oriented and chit-chat
	dialogue systems and summarizing the open challenges.
	We target students and practitioners who have some deep learning background
	and want to become more familiar with conversational dialogue systems.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>chen-gao:2017:I17-5</bibkey>
  </paper>

  <paper id="5004">
    <title>Neural Machine Translation: Basics, Practical Aspects and Recent Trends</title>
    <author><first>Fabien</first><last>Cromieres</last></author>
    <author><first>Toshiaki</first><last>Nakazawa</last></author>
    <author><first>Raj</first><last>Dabre</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Tutorial Abstracts</booktitle>
    <month>November</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>11&#8211;13</pages>
    <url>http://www.aclweb.org/anthology/I17-5004</url>
    <abstract>Machine Translation (MT) is a sub-field of NLP which has experienced a
	number of paradigm shifts since its inception. Until 2014, Phrase-Based
	Statistical Machine Translation (PBSMT) approaches were the state of the
	art. In late 2014, Neural Machine Translation (NMT) was introduced and shown
	to outperform all PBSMT approaches by a significant margin. Since then, NMT
	approaches have undergone several transformations which have pushed the
	state of the art even further.
	This tutorial is primarily aimed at researchers who are either interested in
	or fairly new to the world of NMT and want to obtain a deep understanding of
	NMT fundamentals. Because it will also cover the latest developments in NMT,
	it should also be useful to attendees with some experience in NMT.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>cromieres-nakazawa-dabre:2017:I17-5</bibkey>
  </paper>

  <paper id="5005">
    <title>The Ultimate Presentation Makeup Tutorial: How to Polish your Posters, Slides and Presentations Skills</title>
    <author><first>Gustavo</first><last>Paetzold</last></author>
    <author><first>Lucia</first><last>Specia</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Tutorial Abstracts</booktitle>
    <month>November</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>14&#8211;15</pages>
    <url>http://www.aclweb.org/anthology/I17-5005</url>
    <abstract>There is no question that our research community has been, and still is,
	producing an insurmountable number of interesting strategies, models and
	tools for a wide array of problems and challenges in diverse areas of
	knowledge. But for as long as interesting work has existed, we’ve been
	plagued by a great unsolved mystery: how come there is so much interesting
	work being published at conferences, but so few equally interesting and
	engaging posters and presentations featured in them? In this tutorial, we
	present practical step-by-step makeup solutions for posters, slides and
	oral presentations in order to help researchers who feel they are not able
	to convey the importance of their research to the community at
	conferences.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>paetzold-specia:2017:I17-5</bibkey>
  </paper>

  <paper id="5006">
    <title>The Challenge of Composition in Distributional and Formal Semantics</title>
    <author><first>Ran</first><last>Tian</last></author>
    <author><first>Koji</first><last>Mineshima</last></author>
    <author><first>Pascual</first><last>Mart&#237;nez-G&#243;mez</last></author>
    <booktitle>Proceedings of the IJCNLP 2017, Tutorial Abstracts</booktitle>
    <month>November</month>
    <year>2017</year>
    <address>Taipei, Taiwan</address>
    <publisher>Asian Federation of Natural Language Processing</publisher>
    <pages>16&#8211;17</pages>
    <url>http://www.aclweb.org/anthology/I17-5006</url>
    <abstract>The principle of compositionality states that the meaning of a complete
	sentence must be explained in terms of the meanings of its subsentential parts;
	in other words, each syntactic operation should have a corresponding semantic
	operation. In recent years, it has become increasingly evident that
	distributional and formal semantics are complementary in addressing
	composition; while the distributional/vector-based approach can naturally
	measure semantic similarity (Mitchell and Lapata, 2010), the formal/symbolic
	approach has a long tradition within logic-based semantic frameworks (Montague,
	1974) and can readily be connected to theorem provers or databases to perform
	complicated tasks. In this tutorial, we will cover recent efforts in extending
	word vectors to account for composition and reasoning, the various challenging
	phenomena observed in composition and addressed by formal semantics, and a
	hybrid approach that combines the merits of the two.
	Ran Tian taught a tutorial at the Annual Meeting of the Association for
	Natural Language Processing in Japan in 2015, with an estimated audience of
	about one hundred. Only a limited part of the content of this tutorial is
	drawn from that one.
	Koji Mineshima taught a one-week course at the 28th European Summer School
	in Logic, Language and Information (ESSLLI 2016), together with Prof. Daisuke
	Bekki; only a small portion of its content is shared with this tutorial.
	Tutorials on "CCG Semantic Parsing" were given at ACL 2013, EMNLP 2014, and
	AAAI 2015, and a tutorial on "Deep Learning for Semantic Composition" was
	given at ACL 2017. The content of those tutorials is related to, but does
	not overlap with, this one.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>tian-mineshima-martinezgomez:2017:I17-5</bibkey>
  </paper>

</volume>

