2025
Entailment-Preserving First-order Logic Representations in Natural Language Entailment
Jinu Lee | Qi Liu | Runzhi Ma | Vincent Han | Ziqi Wang | Heng Ji | Julia Hockenmaier
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
First-order logic (FOL) is often used to represent logical entailment, but determining natural language (NL) entailment using FOL remains a challenge. To address this, we propose the Entailment-Preserving FOL representations (EPF) task and introduce reference-free evaluation metrics for EPF (Entailment-Preserving Rate (EPR) family). In EPF, one should generate FOL representations from multi-premise NL entailment data (e.g., EntailmentBank) so that the automatic prover’s result preserves the entailment labels. Furthermore, we propose a training method specialized for the task, iterative learning-to-rank, which trains an NL-to-FOL translator by using the natural language entailment labels as verifiable rewards. Our method achieves a 1.8–2.7% improvement in EPR and a 17.4–20.6% increase in EPR@16 over diverse baselines on three datasets. Further analyses reveal that iterative learning-to-rank effectively suppresses the arbitrariness of FOL representation by reducing the diversity of predicate signatures, and maintains strong performance across diverse inference types and out-of-domain data.
Toward Efficient Sparse Autoencoder-Guided Steering for Improved In-Context Learning in Large Language Models
Ikhyun Cho | Julia Hockenmaier
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Sparse autoencoders (SAEs) have emerged as a powerful analytical tool in mechanistic interpretability for large language models (LLMs), with growing success in applications beyond interpretability. Building on this momentum, we present a novel approach that leverages SAEs to enhance the general in-context learning (ICL) performance of LLMs. Specifically, we introduce Feature Detection through Prompt Variation (FDPV), which leverages the SAE’s remarkable ability to capture subtle differences between prompts, enabling efficient feature selection for downstream steering. In addition, we propose a novel steering method tailored to ICL—Selective In-Context Steering (SISTER)—grounded in recent insights from ICL research that LLMs utilize label words as key anchors. Our method yields a 3.5% average performance improvement across diverse text classification tasks and exhibits greater robustness to hyperparameter variations compared to standard steering approaches. Our code is available at https://github.com/ihcho2/SAE-ICL.
The Power of Bullet Lists: A Simple Yet Effective Prompting Approach to Enhancing Spatial Reasoning in Large Language Models
Ikhyun Cho | Changyeon Park | Julia Hockenmaier
Findings of the Association for Computational Linguistics: NAACL 2025
While large language models (LLMs) are dominating the field of natural language processing, it remains an open question how well these models can perform spatial reasoning. Contrary to recent studies suggesting that LLMs struggle with spatial reasoning tasks, we demonstrate in this paper that a novel prompting technique, termed Patient Visualization of Thought (Patient-VoT), can boost LLMs’ spatial reasoning abilities. The core idea behind Patient-VoT is to explicitly integrate *bullet lists, coordinates, and visualizations* into the reasoning process. By applying Patient-VoT, we achieve a significant boost in spatial reasoning performance compared to prior prompting techniques. We also show that integrating bullet lists into reasoning is effective in planning tasks, highlighting its general effectiveness across different applications.
Evaluating Step-by-step Reasoning Traces: A Survey
Jinu Lee | Julia Hockenmaier
Findings of the Association for Computational Linguistics: EMNLP 2025
Step-by-step reasoning is widely used to enhance the reasoning ability of large language models (LLMs) in complex problems. Evaluating the quality of reasoning traces is crucial for understanding and improving LLM reasoning. However, existing evaluation practices are highly inconsistent, resulting in fragmented progress across evaluator design and benchmark development. To address this gap, this survey provides a comprehensive overview of step-by-step reasoning evaluation, proposing a taxonomy of evaluation criteria with four top-level categories (factuality, validity, coherence, and utility). Based on the taxonomy, we review different datasets, evaluator implementations, and recent findings, leading to promising directions for future research.
On the Versatility of Sparse Autoencoders for In-Context Learning
Ikhyun Cho | Gaeul Kwon | Julia Hockenmaier
Findings of the Association for Computational Linguistics: EMNLP 2025
Sparse autoencoders (SAEs) are emerging as a key analytical tool in the field of mechanistic interpretability for large language models (LLMs). While SAEs have primarily been used for interpretability, we shift focus and explore an understudied question: “Can SAEs be applied to practical tasks beyond interpretability?” Given that SAEs are trained on billions of tokens for sparse reconstruction, we believe they can serve as effective extractors, offering a wide range of useful knowledge that can benefit practical applications. Building on this motivation, we demonstrate that SAEs can be effectively applied to in-context learning (ICL). In particular, we highlight the utility of the SAE-reconstruction loss by showing that it provides a valuable signal in ICL—exhibiting a strong correlation with LLM performance and offering a powerful unsupervised approach for prompt selection. These findings underscore the versatility of SAEs and reveal their potential for real-world applications beyond interpretability. Our code is available at https://github.com/ihcho2/SAE-GPS.
Rating Roulette: Self-Inconsistency in LLM-As-A-Judge Frameworks
Rajarshi Haldar | Julia Hockenmaier
Findings of the Association for Computational Linguistics: EMNLP 2025
As Natural Language Generation (NLG) continues to be widely adopted, properly assessing it has become quite difficult. Lately, using large language models (LLMs) for evaluating these generations has gained traction, as they tend to align more closely with human preferences than conventional n-gram or embedding-based metrics. In our experiments, we show that LLM judges have low intra-rater reliability in their assigned scores across different runs. This variance makes their ratings inconsistent, almost arbitrary in the worst case, making it difficult to measure how good their judgments actually are. We quantify this inconsistency across different NLG tasks and benchmarks, and examine whether, with proper guidelines, judicious use of LLM judges can still be worthwhile.
2024
Tutor-ICL: Guiding Large Language Models for Improved In-Context Learning Performance
Ikhyun Cho | Gaeul Kwon | Julia Hockenmaier
Findings of the Association for Computational Linguistics: EMNLP 2024
There has been a growing body of work focusing on the in-context learning (ICL) abilities of large language models (LLMs). However, it is an open question how effective ICL can be. This paper presents Tutor-ICL, a simple prompting method for classification tasks inspired by how effective instructors might engage their students in learning a task. Specifically, we propose presenting exemplar answers in a *comparative format* rather than the traditional single-answer format. We also show that including the test instance before the exemplars can improve performance, making it easier for LLMs to focus on relevant exemplars. Lastly, we include a summarization step before attempting the test, following a common human practice. Experiments on various classification tasks, conducted across both decoder-only LLMs (Llama 2, 3) and encoder-decoder LLMs (Flan-T5-XL, XXL), show that Tutor-ICL consistently boosts performance, achieving up to a 13.76% increase in accuracy.
Analyzing the Performance of Large Language Models on Code Summarization
Rajarshi Haldar | Julia Hockenmaier
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Large language models (LLMs) such as Llama 2 perform very well on tasks that involve both natural language and source code, particularly code summarization and code generation. We show that for the task of code summarization, the performance of these models on individual examples often depends on the amount of (subword) token overlap between the code and the corresponding reference natural language descriptions in the dataset. This token overlap arises because the reference descriptions in standard datasets (corresponding to docstrings in large code bases) are often highly similar to the names of the functions they describe. We also show that this token overlap occurs largely in the function names of the code and compare the relative performance of these models after removing function names versus removing code structure. We also show that using multiple evaluation metrics like BLEU and BERTScore gives us very little additional insight since these metrics are highly correlated with each other.
2023
Multimedia Generative Script Learning for Task Planning
Qingyun Wang | Manling Li | Hou Pong Chan | Lifu Huang | Julia Hockenmaier | Girish Chowdhary | Heng Ji
Findings of the Association for Computational Linguistics: ACL 2023
Goal-oriented generative script learning aims to generate subsequent steps to reach a particular goal, which is an essential task to assist robots or humans in performing stereotypical activities. An important aspect of this process is the ability to capture historical states visually, which provides detailed information that is not covered by text and will guide subsequent steps. Therefore, we propose a new task, Multimedia Generative Script Learning, to generate subsequent steps by tracking historical states in both text and vision modalities, as well as presenting the first benchmark containing 5,652 tasks and 79,089 multimedia steps. This task is challenging in three aspects: the multimedia challenge of capturing the visual states in images, the induction challenge of performing unseen tasks, and the diversity challenge of covering different information in individual steps. We propose to encode visual state changes through a selective multimedia encoder to address the multimedia challenge, transfer knowledge from previously observed tasks using a retrieval-augmented decoder to overcome the induction challenge, and further present distinct information at each step by optimizing a diversity-oriented contrastive learning objective. We define metrics to evaluate both generation and inductive quality. Experiment results demonstrate that our approach significantly outperforms strong baselines.
A Framework for Bidirectional Decoding: Case Study in Morphological Inflection
Marc Canby | Julia Hockenmaier
Findings of the Association for Computational Linguistics: EMNLP 2023
Transformer-based encoder-decoder models that generate outputs in a left-to-right fashion have become standard for sequence-to-sequence tasks. In this paper, we propose a framework for decoding that produces sequences from the “outside-in”: at each step, the model chooses to generate a token on the left, on the right, or join the left and right sequences. We argue that this is more principled than prior bidirectional decoders. Our proposal supports a variety of model architectures and includes several training methods, such as a dynamic programming algorithm that marginalizes out the latent ordering variable. Our model sets state-of-the-art (SOTA) on the 2022 and 2023 shared tasks, beating the next best systems by over 4.7 and 2.7 points in average accuracy respectively. The model performs particularly well on long sequences, can implicitly learn the split point of words composed of stem and affix, and performs better relative to the baseline on datasets that have fewer unique lemmas.
SIR-ABSC: Incorporating Syntax into RoBERTa-based Sentiment Analysis Models with a Special Aggregator Token
Ikhyun Cho | Yoonhwa Jung | Julia Hockenmaier
Findings of the Association for Computational Linguistics: EMNLP 2023
We present a simple, but effective method to incorporate syntactic dependency information directly into transformer-based language models (e.g. RoBERTa) for tasks such as Aspect-Based Sentiment Classification (ABSC), where the desired output depends on specific input tokens. In contrast to prior approaches to ABSC that capture syntax by combining language models with graph neural networks over dependency trees, our model, Syntax-Integrated RoBERTa for ABSC (SIR-ABSC) incorporates syntax directly into the language model by using a novel aggregator token. Yet, SIR-ABSC outperforms these more complex models, yielding new state-of-the-art results on ABSC.
2021
HySPA: Hybrid Span Generation for Scalable Text-to-Graph Extraction
Liliang Ren | Chenkai Sun | Heng Ji | Julia Hockenmaier
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2020
Learning to execute instructions in a Minecraft dialogue
Prashant Jayannavar | Anjali Narayan-Chen | Julia Hockenmaier
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The Minecraft Collaborative Building Task is a two-player game in which an Architect (A) instructs a Builder (B) to construct a target structure in a simulated Blocks World Environment. We define the subtask of predicting correct action sequences (block placements and removals) in a given game context, and show that capturing B’s past actions as well as B’s perspective leads to a significant improvement in performance on this challenging language understanding problem.
A Multi-Perspective Architecture for Semantic Code Search
Rajarshi Haldar | Lingfei Wu | JinJun Xiong | Julia Hockenmaier
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The ability to match pieces of code to their corresponding natural language descriptions and vice versa is fundamental for natural language search interfaces to software repositories. In this paper, we propose a novel multi-perspective cross-lingual neural framework for code–text matching, inspired in part by a previous model for monolingual text-to-text matching, to capture both global and local similarities. Our experiments on the CoNaLa dataset show that our proposed model yields better performance on this cross-lingual text-to-code matching task than previous approaches that map code and text to a single joint embedding space.
University of Illinois Submission to the SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological Inflection
Marc Canby | Aidana Karipbayeva | Bryan Lunt | Sahand Mozaffari | Charlotte Yoder | Julia Hockenmaier
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
The objective of this shared task is to produce an inflected form of a word, given its lemma and a set of tags describing the attributes of the desired form. In this paper, we describe a transformer-based model that uses a bidirectional decoder to perform this task, and evaluate its performance on the 90 languages and 18 language families used in this task.
2019
Phrase Grounding by Soft-Label Chain Conditional Random Field
Jiacheng Liu | Julia Hockenmaier
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
The phrase grounding task aims to ground each entity mention in a given caption of an image to a corresponding region in that image. Although there are clear dependencies between how different mentions of the same caption should be grounded, previous structured prediction methods that aim to capture such dependencies need to resort to approximate inference or non-differentiable losses. In this paper, we formulate phrase grounding as a sequence labeling task where we treat candidate regions as potential labels, and use neural chain Conditional Random Fields (CRFs) to model dependencies among regions for adjacent mentions. In contrast to standard sequence labeling tasks, the phrase grounding task is defined such that there may be multiple correct candidate regions. To address this multiplicity of gold labels, we define so-called Soft-Label Chain CRFs, and present an algorithm that enables convenient end-to-end training. Our method establishes a new state-of-the-art on phrase grounding on the Flickr30k Entities dataset. Analysis shows that our model benefits both from the entity dependencies captured by the CRF and from the soft-label training regime. Our code is available at github.com/liujch1998/SoftLabelCCRF.
Collaborative Dialogue in Minecraft
Anjali Narayan-Chen | Prashant Jayannavar | Julia Hockenmaier
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
We wish to develop interactive agents that can communicate with humans to collaboratively solve tasks in grounded scenarios. Since computer games allow us to simulate such tasks without the need for physical robots, we define a Minecraft-based collaborative building task in which one player (A, the Architect) is shown a target structure and needs to instruct the other player (B, the Builder) to build this structure. Both players interact via a chat interface. A can observe B but cannot place blocks. We present the Minecraft Dialogue Corpus, a collection of 509 conversations and game logs. As a first step towards our goal of developing fully interactive agents for this task, we consider the subtask of Architect utterance generation, and show how challenging it is.
2018
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Ellen Riloff | David Chiang | Julia Hockenmaier | Jun’ichi Tsujii
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
2017
Learning to Predict Denotational Probabilities For Modeling Entailment
Alice Lai | Julia Hockenmaier
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
We propose a framework that captures the denotational probabilities of words and phrases by embedding them in a vector space, and present a method to induce such an embedding from a dataset of denotational probabilities. We show that our model successfully predicts denotational probabilities for unseen phrases, and that its predictions are useful for textual entailment datasets such as SICK and SNLI.
Natural Language Inference from Multiple Premises
Alice Lai | Yonatan Bisk | Julia Hockenmaier
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
We define a novel textual entailment task that requires inference over multiple premise sentences. We present a new dataset for this task that minimizes trivial lexical inferences, emphasizes knowledge of everyday events, and presents a more challenging setting for textual entailment. We evaluate several strong neural baselines and analyze how the multiple premise task differs from standard textual entailment.
Towards Problem Solving Agents that Communicate and Learn
Anjali Narayan-Chen | Colin Graber | Mayukh Das | Md Rakibul Islam | Soham Dan | Sriraam Natarajan | Janardhan Rao Doppa | Julia Hockenmaier | Martha Palmer | Dan Roth
Proceedings of the First Workshop on Language Grounding for Robotics
Agents that communicate back and forth with humans to help them execute non-linguistic tasks are a long sought goal of AI. These agents need to translate between utterances and actionable meaning representations that can be interpreted by task-specific problem solvers in a context-dependent manner. They should also be able to learn such actionable interpretations for new predicates on the fly. We define an agent architecture for this scenario and present a series of experiments in the Blocks World domain that illustrate how our architecture supports language learning and problem solving in this domain.
2016
Evaluating Induced CCG Parsers on Grounded Semantic Parsing
Yonatan Bisk | Siva Reddy | John Blitzer | Julia Hockenmaier | Mark Steedman
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Focused Evaluation for Image Description with Binary Forced-Choice Tasks
Micah Hodosh | Julia Hockenmaier
Proceedings of the 5th Workshop on Vision and Language
2015
Probing the Linguistic Strengths and Limitations of Unsupervised Grammar Induction
Yonatan Bisk | Julia Hockenmaier
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Labeled Grammar Induction with Minimal Supervision
Yonatan Bisk | Christos Christodoulopoulos | Julia Hockenmaier
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
2014
From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions
Peter Young | Alice Lai | Micah Hodosh | Julia Hockenmaier
Transactions of the Association for Computational Linguistics, Volume 2
We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions.
Illinois-LH: A Denotational and Distributional Approach to Semantics
Alice Lai | Julia Hockenmaier
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)
2013
Proceedings of the 2013 NAACL HLT Student Research Workshop
Annie Louis | Richard Socher | Julia Hockenmaier | Eric K. Ringger
Proceedings of the 2013 NAACL HLT Student Research Workshop
An HDP Model for Inducing Combinatory Categorial Grammars
Yonatan Bisk | Julia Hockenmaier
Transactions of the Association for Computational Linguistics, Volume 1
We introduce a novel nonparametric Bayesian model for the induction of Combinatory Categorial Grammars from POS-tagged text. It achieves state-of-the-art performance on a number of languages, and induces linguistically plausible lexicons.
Proceedings of the Workshop on Vision and Natural Language Processing
Julia Hockenmaier | Tamara Berg
Proceedings of the Workshop on Vision and Natural Language Processing
Proceedings of the Seventeenth Conference on Computational Natural Language Learning
Julia Hockenmaier | Sebastian Riedel
Proceedings of the Seventeenth Conference on Computational Natural Language Learning
2012
Beefmoves: Dissemination, Diversity, and Dynamics of English Borrowings in a German Hip Hop Forum
Matt Garley | Julia Hockenmaier
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Induction of Linguistic Structure with Combinatory Categorial Grammars
Yonatan Bisk | Julia Hockenmaier
Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure
2010
Normal-form parsing for Combinatory Categorial Grammars with generalized composition and type-raising
Julia Hockenmaier | Yonatan Bisk
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)
Shallow Information Extraction from Medical Forum Data
Parikshit Sondhi | Manish Gupta | ChengXiang Zhai | Julia Hockenmaier
Coling 2010: Posters
Citation Author Topic Model in Expert Search
Yuancheng Tu | Nikhil Johri | Dan Roth | Julia Hockenmaier
Coling 2010: Posters
Proceedings of the NAACL HLT 2010 Student Research Workshop
Julia Hockenmaier | Diane Litman | Adriane Boyd | Mahesh Joshi | Frank Rudzicz
Proceedings of the NAACL HLT 2010 Student Research Workshop
Wide-Coverage NLP with Linguistically Expressive Grammars
Julia Hockenmaier | Yusuke Miyao | Josef van Genabith
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts
Collecting Image Annotations Using Amazon’s Mechanical Turk
Cyrus Rashtchian | Peter Young | Micah Hodosh | Julia Hockenmaier
Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk
Cross-Caption Coreference Resolution for Automatic Image Understanding
Micah Hodosh | Peter Young | Cyrus Rashtchian | Julia Hockenmaier
Proceedings of the Fourteenth Conference on Computational Natural Language Learning
The Future Role of Language Resources for Natural Language Parsing (We Won’t Be Able to Rely on Pierre Vinken Forever... or Will We Have to?)
Julia Hockenmaier
Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation
2008
Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation
Johan Bos | Edward Briscoe | Aoife Cahill | John Carroll | Stephen Clark | Ann Copestake | Dan Flickinger | Josef van Genabith | Julia Hockenmaier | Aravind Joshi | Ronald Kaplan | Tracy Holloway King | Sandra Kuebler | Dekang Lin | Jan Tore Lønning | Christopher Manning | Yusuke Miyao | Joakim Nivre | Stephan Oepen | Kenji Sagae | Nianwen Xue | Yi Zhang
Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation
Non-local scrambling: the equivalence of TAG and CCG revisited
Julia Hockenmaier | Peter Young
Proceedings of the Ninth International Workshop on Tree Adjoining Grammar and Related Frameworks (TAG+9)
2007
CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank
Julia Hockenmaier | Mark Steedman
Computational Linguistics, Volume 33, Number 3, September 2007
ACL 2007 Workshop on Deep Linguistic Processing
Timothy Baldwin | Mark Dras | Julia Hockenmaier | Tracy Holloway King | Gertjan van Noord
ACL 2007 Workshop on Deep Linguistic Processing
The Impact of Deep Linguistic Processing on Parsing Technology
Timothy Baldwin | Mark Dras | Julia Hockenmaier | Tracy Holloway King | Gertjan van Noord
Proceedings of the Tenth International Conference on Parsing Technologies
2006
Creating a CCGbank and a Wide-Coverage CCG Lexicon for German
Julia Hockenmaier
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics
Protein folding and chart parsing
Julia Hockenmaier | Aravind K. Joshi | Ken A. Dill
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing
Priming Effects in Combinatory Categorial Grammar
David Reitter | Julia Hockenmaier | Frank Keller
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing
2004
Wide-Coverage Semantic Representations from a CCG Parser
Johan Bos | Stephen Clark | Mark Steedman | James R. Curran | Julia Hockenmaier
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics
2003
Bootstrapping statistical parsers from small datasets
Mark Steedman | Miles Osborne | Anoop Sarkar | Stephen Clark | Rebecca Hwa | Julia Hockenmaier | Paul Ruhlen | Steven Baker | Jeremiah Crim
10th Conference of the European Chapter of the Association for Computational Linguistics
Example Selection for Bootstrapping Statistical Parsers
Mark Steedman | Rebecca Hwa | Stephen Clark | Miles Osborne | Anoop Sarkar | Julia Hockenmaier | Paul Ruhlen | Steven Baker | Jeremiah Crim
Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics
Parsing with Generative Models of Predicate-Argument Structure
Julia Hockenmaier
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics
Identifying Semantic Roles Using Combinatory Categorial Grammar
Daniel Gildea | Julia Hockenmaier
Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing
2002
Acquiring Compact Lexicalized Grammars from a Cleaner Treebank
Julia Hockenmaier | Mark Steedman
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)
Building Deep Dependency Structures using a Wide-Coverage CCG Parser
Stephen Clark | Julia Hockenmaier | Mark Steedman
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics
Generative Models for Statistical Parsing with Combinatory Categorial Grammar
Julia Hockenmaier | Mark Steedman
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics
1998
Error-Driven Learning of Chinese Word Segmentation
Julia Hockenmaier | Chris Brew
Proceedings of the 12th Pacific Asia Conference on Language, Information and Computation