<?xml version="1.0" encoding="UTF-8" ?>
<volume id="W17">
  <paper id="0700">
    <title>Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)</title>
    <editor>Ted Gibson</editor>
    <editor>Tal Linzen</editor>
    <editor>Asad Sayeed</editor>
    <editor>Martin van Schijndel</editor>
    <editor>William Schuler</editor>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <url>http://www.aclweb.org/anthology/W17-07</url>
    <bibtype>book</bibtype>
    <bibkey>CMCL:2017</bibkey>
  </paper>

  <paper id="0701">
    <title>Entropy Reduction correlates with temporal lobe activity</title>
    <author><first>Matthew</first><last>Nelson</last></author>
    <author><first>Stanislas</first><last>Dehaene</last></author>
    <author><first>Christophe</first><last>Pallier</last></author>
    <author><first>John</first><last>Hale</last></author>
    <booktitle>Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>1&#8211;10</pages>
    <url>http://www.aclweb.org/anthology/W17-0701</url>
    <abstract>Using the Entropy Reduction incremental complexity metric,
	we relate high gamma power signals from the brains of epileptic patients
	to incremental stages of syntactic analysis in English and French.
	We find that signals recorded intracranially from the anterior Inferior
	Temporal Sulcus (aITS) and
	the posterior Inferior Temporal Gyrus (pITG) correlate with word-by-word
	Entropy Reduction values
	derived from phrase structure grammars for those languages.
	In the anterior region, this correlation persists even in combination with
	surprisal co-predictors
	from PCFG and n-gram models.
	The result confirms the idea that the brain's temporal lobe houses a parsing
	function,
	one whose incremental processing difficulty profile reflects changes in
	grammatical uncertainty.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>nelson-EtAl:2017:CMCL</bibkey>
  </paper>

  <paper id="0702">
    <title>Learning an Input Filter for Argument Structure Acquisition</title>
    <author><first>Laurel</first><last>Perkins</last></author>
    <author><first>Naomi</first><last>Feldman</last></author>
    <author><first>Jeffrey</first><last>Lidz</last></author>
    <booktitle>Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>11&#8211;19</pages>
    <url>http://www.aclweb.org/anthology/W17-0702</url>
    <abstract>How do children learn a verb’s argument structure when their input contains
	non-basic clauses that obscure verb transitivity? Here we present a new model
	that infers verb transitivity by learning to filter out non-basic clauses that
	were likely parsed in error. In simulations with child-directed speech, we show
	that this model accurately categorizes the majority of 50 frequent transitive,
	intransitive and alternating verbs, and jointly learns appropriate parameters
	for filtering parsing errors. Our model is thus able to filter out problematic
	data for verb learning without knowing in advance which data need to be
	filtered.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>perkins-feldman-lidz:2017:CMCL</bibkey>
  </paper>

  <paper id="0703">
    <title>Grounding sound change in ideal observer models of perception</title>
    <author><first>Zachary</first><last>Burchill</last></author>
    <author><first>T. Florian</first><last>Jaeger</last></author>
    <booktitle>Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>20&#8211;28</pages>
    <url>http://www.aclweb.org/anthology/W17-0703</url>
    <abstract>An important predictor of historical sound change, functional load, fails to
	capture insights from speech perception. Building on ideal observer models of
	word recognition, we devise a new definition of functional load that
	incorporates both a priori predictability and perceptual information. We
	explore this new measure with a simple model and find that it outperforms
	traditional measures.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>burchill-jaeger:2017:CMCL</bibkey>
  </paper>

  <paper id="0704">
    <title>&#x201c;Oh, I've Heard That Before": Modelling Own-Dialect Bias After Perceptual Learning by Weighting Training Data</title>
    <author><first>Rachael</first><last>Tatman</last></author>
    <booktitle>Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>29&#8211;34</pages>
    <url>http://www.aclweb.org/anthology/W17-0704</url>
    <abstract>Human listeners are able to quickly and robustly adapt to new accents and do so
	by using information about speakers' identities. This paper presents
	experimental evidence that, even when given information about speakers'
	identities, listeners retain a strong bias towards the acoustics of their own
	dialect after dialect learning. Participants' behaviour was accurately mimicked
	by a classifier which was trained on more cases from the base dialect and fewer
	from the target dialect. This suggests that imbalanced training data may result
	in automatic speech recognition errors consistent with those of speakers from
	populations over-represented in the training data.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>tatman:2017:CMCL</bibkey>
  </paper>

  <paper id="0705">
    <title>Inherent Biases of Recurrent Neural Networks for Phonological Assimilation and Dissimilation</title>
    <author><first>Amanda</first><last>Doucette</last></author>
    <booktitle>Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>35&#8211;40</pages>
    <url>http://www.aclweb.org/anthology/W17-0705</url>
    <abstract>A recurrent neural network model of phonological pattern learning is proposed.
	The model is a relatively simple neural network with one recurrent layer, and
	displays biases in learning that mimic observed biases in human learning.
	Single-feature patterns are learned faster than two-feature patterns, and vowel-only
	or consonant-only patterns are learned faster than patterns involving vowels
	and consonants, mimicking the results of laboratory learning experiments. In
	non-recurrent models, capturing these biases requires the use of alpha features
	or some other representation of repeated features, but with a recurrent neural
	network, these elaborations are not necessary.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>doucette:2017:CMCL</bibkey>
  </paper>

  <paper id="0706">
    <title>Predicting Japanese scrambling in the wild</title>
    <author><first>Naho</first><last>Orita</last></author>
    <booktitle>Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>41&#8211;45</pages>
    <url>http://www.aclweb.org/anthology/W17-0706</url>
    <abstract>Japanese speakers have a choice between canonical SOV and scrambled OSV word
	order to express the same meaning. Although previous experiments have examined
	the influence of one or two factors on scrambling in controlled settings, it is
	not yet known how multiple effects jointly contribute to scrambling. This
	study uses naturally distributed data to test multiple effects on
	scrambling simultaneously. A regression analysis replicates the NP length
	effect and suggests an influence of noun type, but it provides no evidence
	for syntactic priming, given-new ordering, or the animacy effect. These
	findings show evidence only for sentence-internal factors; we find no
	evidence that discourse-level factors play a role.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>orita:2017:CMCL</bibkey>
  </paper>

</volume>