Alexander Rudnicky

Also published as: A. Rudnicky, Alex Rudnicky, Alexander I. Rudnicky

Other people with similar names: Alex Rudnick


2024

Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation
Ta-Chung Chi | Ting-Han Fan | Alexander Rudnicky
Findings of the Association for Computational Linguistics: NAACL 2024

An ideal length-extrapolatable Transformer language model can handle sequences longer than the training length without any fine-tuning. Such long-context utilization capability relies heavily on a flexible positional embedding design. Upon investigating the flexibility of existing large pre-trained Transformer language models, we find that the T5 family deserves a closer look, as its positional embeddings capture rich and flexible attention patterns. However, T5 suffers from the dispersed attention issue: the longer the input sequence, the flatter the attention distribution. To alleviate this issue, we propose two attention alignment strategies via temperature scaling. Our findings show improved long-context utilization by T5 on language modeling, retrieval, multi-document question answering, and code completion tasks, all without fine-tuning. This suggests that a flexible positional embedding design and attention alignment can go a long way toward Transformer length extrapolation. The code is released at: https://github.com/chijames/T5-Attention-Alignment
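
As a rough illustration of the mechanism, here is a minimal sketch of attention alignment via temperature scaling: the attention logits are multiplied by a length-dependent temperature so that the softmax over a longer-than-training input stays as peaked as it was at the training length. The log-length rule below is an assumption for illustration, not necessarily the paper’s exact strategy.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def aligned_attention(logits, train_len, test_len):
    """Counteract dispersed attention on longer-than-training inputs.

    Multiplying the logits by tau >= 1 sharpens the softmax, keeping its
    entropy over test_len keys close to what the model saw over train_len
    keys during training. The log-ratio temperature is a hypothetical
    choice used here purely for illustration.
    """
    tau = np.log(test_len) / np.log(train_len)
    return softmax(tau * logits, axis=-1)
```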

Advancing Regular Language Reasoning in Linear Recurrent Neural Networks
Ting-Han Fan | Ta-Chung Chi | Alexander Rudnicky
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)

In recent studies, linear recurrent neural networks (LRNNs) have achieved Transformer-level performance in natural language and long-range modeling, while offering rapid parallel training and constant inference cost. With the resurgence of interest in LRNNs, we study whether they can learn the hidden rules in training sequences, such as the grammatical structures of regular language. We theoretically analyze some existing LRNNs and discover their limitations in modeling regular language. Motivated by this analysis, we propose a new LRNN equipped with a block-diagonal and input-dependent transition matrix. Experiments suggest that the proposed model is the only LRNN capable of performing length extrapolation on regular language tasks such as Sum, Even Pair, and Modular Arithmetic. The code is released at https://github.com/tinghanf/RegluarLRNN.
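
A minimal sketch of the proposed recurrence, assuming a step of the form h_t = A(x_t) h_{t-1} + x_t with A(x_t) block-diagonal; the block parameterization and names below are illustrative rather than the paper’s implementation.

```python
import torch

def lrnn_step(h, x, block_maps):
    """One step of a linear RNN whose transition matrix is block-diagonal
    and input-dependent: h_t = A(x_t) h_{t-1} + x_t.

    block_maps is a list of linear maps, each producing one d x d diagonal
    block of A(x_t) from the current input (an illustrative choice).
    """
    d = h.shape[-1] // len(block_maps)
    parts = []
    for i, fmap in enumerate(block_maps):
        A_i = fmap(x).reshape(d, d)               # input-dependent block
        parts.append(A_i @ h[i * d:(i + 1) * d])  # acts on its own state slice
    return torch.cat(parts) + x

# Usage: two 4x4 blocks over an 8-dimensional state, unrolled over 5 steps.
blocks = [torch.nn.Linear(8, 16) for _ in range(2)]
h = torch.zeros(8)
for x in torch.randn(5, 8):
    h = lrnn_step(h, x, blocks)
```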

2023

Dissecting Transformer Length Extrapolation via the Lens of Receptive Field Analysis
Ta-Chung Chi | Ting-Han Fan | Alexander Rudnicky | Peter Ramadge
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Length extrapolation permits training a transformer language model on short sequences while preserving perplexities when it is tested on substantially longer sequences. A relative positional embedding design, ALiBi, has had the widest usage to date. We dissect ALiBi via the lens of receptive field analysis, empowered by a novel cumulative normalized gradient tool. The concept of receptive field further allows us to modify the vanilla Sinusoidal positional embedding to create Sandwich, the first parameter-free relative positional embedding design that truly uses length information beyond the training sequence. Sandwich shares the same logarithmic decaying temporal bias pattern as KERPLE and T5, both of which rely on learnable relative positional embeddings; together, these findings elucidate future extrapolatable positional embedding design.
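
The core of Sandwich can be sketched in a few lines: the inner product of two vanilla sinusoidal embeddings depends only on the distance i - j and decays with it, so it can serve directly as a parameter-free relative bias added to the attention logits. The reading below is a sketch; the paper may rescale or select dimensions differently.

```python
import numpy as np

def sinusoidal(pos, d):
    """Vanilla sinusoidal positional embedding for a single position."""
    inv_freq = 1.0 / (10000 ** (np.arange(d // 2) * 2.0 / d))
    ang = pos * inv_freq
    return np.concatenate([np.sin(ang), np.cos(ang)])

def sandwich_bias(seq_len, d):
    """Parameter-free relative bias b[i, j] = <p_i, p_j>.

    Since sin(iw)sin(jw) + cos(iw)cos(jw) = cos((i - j)w), the bias is a
    function of i - j alone and decays with distance; it is added to the
    attention logits before the softmax.
    """
    p = np.stack([sinusoidal(i, d) for i in range(seq_len)])
    return p @ p.T
```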

Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings
Ta-Chung Chi | Ting-Han Fan | Li-Wei Chen | Alexander Rudnicky | Peter Ramadge
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

The use of positional embeddings in transformer language models is widely accepted. However, recent research has called into question the necessity of such embeddings. We further extend this inquiry by demonstrating that a randomly initialized and frozen transformer language model, devoid of positional embeddings, inherently encodes strong positional information through the shrinkage of self-attention variance. To quantify this variance, we derive the underlying distribution of each step within a transformer layer. Through empirical validation using a fully pretrained model, we show that the variance shrinkage effect still persists after extensive gradient updates. Our findings serve to justify the decision to discard positional embeddings and thus facilitate more efficient pretraining of transformer language models.
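
The effect is easy to reproduce in miniature. The toy sketch below pushes random inputs through a single randomly initialized, frozen causal self-attention layer with no positional embeddings: later positions average over more keys, so their output variance shrinks, making position recoverable from variance alone. This is a single-head illustration of the phenomenon, not the paper’s derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, trials = 64, 32, 200

# Random frozen projections for one attention head (no positional embeddings).
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

var = np.zeros(T)
for _ in range(trials):
    x = rng.normal(size=(T, d))
    scores = (x @ Wq) @ (x @ Wk).T / np.sqrt(d)
    scores = np.where(np.tril(np.ones((T, T), dtype=bool)), scores, -np.inf)
    scores -= scores.max(axis=-1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    out = attn @ (x @ Wv)
    var += out.var(axis=-1)
var /= trials

# Later positions average over more keys, so their variance is smaller --
# a positional signal despite the absence of positional embeddings.
print(var[:5], var[-5:])
```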

Transformer Working Memory Enables Regular Language Reasoning And Natural Language Length Extrapolation
Ta-Chung Chi | Ting-Han Fan | Alexander Rudnicky | Peter Ramadge
Findings of the Association for Computational Linguistics: EMNLP 2023

Conventional wisdom has it that, unlike recurrent models, Transformers cannot perfectly model regular languages. Inspired by the notion of working memory, we propose a new Transformer variant named RegularGPT. With its novel combination of Weight-Sharing, Adaptive-Depth, and Sliding-Dilated-Attention, RegularGPT constructs working memory along the depth dimension, thereby enabling efficient and successful modeling of regular languages such as PARITY. We further test RegularGPT on the task of natural language length extrapolation and surprisingly find that it rediscovers the local windowed attention effect deemed necessary for length extrapolation in prior work.
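
The sliding-dilated-attention component can be pictured as a mask: with weight-shared layers whose dilation doubles per layer, O(log n) layers connect every pair of tokens in a divide-and-conquer pattern, which is the kind of recombination a task like PARITY needs. The construction below is an illustrative reading, not the exact RegularGPT masking.

```python
import numpy as np

def sliding_dilated_mask(seq_len, layer, window=2):
    """Boolean causal mask for one layer: token i attends to positions
    i, i - 2**layer, i - 2 * 2**layer, ... up to window keys, so stacking
    about log2(seq_len) weight-shared layers covers the whole prefix."""
    dil = 2 ** layer
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        for w in range(window):
            j = i - w * dil
            if j >= 0:
                mask[i, j] = True
    return mask

# Layer 0 links adjacent tokens; layer 3 links tokens eight apart.
print(sliding_dilated_mask(8, 0).astype(int))
```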

Overview of Robust and Multilingual Automatic Evaluation Metrics for Open-Domain Dialogue Systems at DSTC 11 Track 4
Mario Rodríguez-Cantelar | Chen Zhang | Chengguang Tang | Ke Shi | Sarik Ghazarian | João Sedoc | Luis Fernando D’Haro | Alexander I. Rudnicky
Proceedings of The Eleventh Dialog System Technology Challenge

The advent and fast development of neural networks have revolutionized research on dialogue systems and subsequently triggered various challenges regarding their automatic evaluation. Automatic evaluation of open-domain dialogue systems remains an open challenge and has been the center of attention for many researchers. Despite consistent efforts to improve automatic metrics’ correlations with human evaluation, there have been very few attempts to assess their robustness across multiple domains and dimensions, and their focus has been mainly on the English language. All of these challenges prompt the development of automatic evaluation metrics that are reliable across domains, dimensions, and languages. This track of the 11th Dialogue System Technology Challenge (DSTC11) is part of the ongoing effort to promote robust and multilingual automatic evaluation metrics. This article describes the datasets and baselines provided to participants and discusses the submissions and results of the two proposed subtasks.

2022

Structured Dialogue Discourse Parsing
Ta-Chung Chi | Alexander Rudnicky
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue

Dialogue discourse parsing aims to uncover the internal structure of a multi-participant conversation by finding all the discourse links and their corresponding relations. Previous work either treats this task as a series of independent multiple-choice problems, in which link existence and relations are decoded separately, or restricts the encoding to local interactions, ignoring holistic structural information. In contrast, we propose a principled method that improves upon previous work from two perspectives: encoding and decoding. On the encoding side, we perform structured encoding on the adjacency matrix followed by the matrix-tree learning algorithm, in which all discourse links and relations in the dialogue are jointly optimized based on a latent tree-level distribution. On the decoding side, we perform structured inference using the modified Chu-Liu-Edmonds algorithm, which explicitly generates the labeled multi-root non-projective spanning tree that best captures the discourse structure. In addition, unlike previous work, we do not rely on hand-crafted features, which improves the model’s robustness. Experiments show that our method achieves a new state of the art, surpassing the previous model by 2.3 F1 on STAC and 1.5 F1 on Molweni.
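
For readers unfamiliar with the structured components, the sketch below shows the matrix-tree side: the log-partition over non-projective spanning trees is the log-determinant of a score-derived Laplacian (the single-root construction of Koo et al., 2007), and its gradient with respect to the edge scores yields the link marginals used during training. Relation labels and the multi-root variant used in the paper are omitted here.

```python
import torch

def tree_log_partition(edge_scores, root_scores):
    """Log-partition over non-projective spanning trees via the
    Matrix-Tree theorem (single-root construction of Koo et al., 2007).

    edge_scores[h, m] scores a discourse link from utterance h to m;
    root_scores[m] scores m being the root. Names are illustrative.
    """
    n = edge_scores.size(0)
    A = edge_scores.exp() * (1 - torch.eye(n))  # edge weights, no self-loops
    L = torch.diag(A.sum(dim=0)) - A            # graph Laplacian
    L = torch.cat([root_scores.exp().unsqueeze(0), L[1:]], dim=0)
    return torch.logdet(L)

# Link marginals come from the gradient of log Z w.r.t. the scores.
s = torch.randn(5, 5, requires_grad=True)
tree_log_partition(s, torch.randn(5)).backward()
print(s.grad)  # marginal probability of each candidate link
```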

An Empirical study to understand the Compositional Prowess of Neural Dialog Models
Vinayshekhar Kumar | Vaibhav Kumar | Mukul Bhutani | Alexander Rudnicky
Proceedings of the Third Workshop on Insights from Negative Results in NLP

In this work, we examine the problems associated with neural dialog models under the common theme of compositionality. Specifically, we investigate three manifestations of compositionality: (1) Productivity, (2) Substitutivity, and (3) Systematicity. These manifestations shed light on the generalization, syntactic robustness, and semantic capabilities of neural dialog models. We design probing experiments that perturb the training data to study these phenomena. We make informative observations based on automated metrics and hope that this work increases research interest in understanding the capacity of these models.

2021

Zero-Shot Dialogue Disentanglement by Self-Supervised Entangled Response Selection
Ta-Chung Chi | Alexander Rudnicky
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Dialogue disentanglement aims to group utterances in a long, multi-participant dialogue into threads. This is useful for discourse analysis and downstream applications such as dialogue response selection, where it can serve as a first step toward constructing a clean context/response set. Unfortunately, labeling all reply-to links takes quadratic effort w.r.t. the number of utterances: an annotator must check all preceding utterances to identify the one to which the current utterance is a reply. In this paper, we are the first to propose a zero-shot dialogue disentanglement solution. First, we train a model on an unannotated multi-participant response selection dataset harvested from the web; we then apply the trained model to perform zero-shot dialogue disentanglement. Without any labeled data, our model achieves a cluster F1 score of 25. We also fine-tune the model using various amounts of labeled data. Experiments show that with only 10% of the data, we achieve nearly the same performance as using the full dataset.

2020

Adjusting Image Attributes of Localized Regions with Low-level Dialogue
Tzu-Hsiang Lin | Alexander Rudnicky | Trung Bui | Doo Soon Kim | Jean Oh
Proceedings of the Twelfth Language Resources and Evaluation Conference

Natural Language Image Editing (NLIE) aims to use natural language instructions to edit images. Since novices are inexperienced with image editing techniques, their instructions are often ambiguous and contain high-level abstractions that require complex editing steps. Motivated by this inexperience, we aim to smooth the learning curve by teaching novices to edit images using low-level command terminology. To this end, we develop a task-oriented dialogue system to investigate low-level instructions for NLIE. Our system grounds language at the level of edit operations and suggests options for users to choose from. Though compelled to express themselves in low-level terms, 25% of users in our evaluation found the system easy to use, resonating with our motivation. Analysis shows that users generally adapt to the proposed low-level language interface. We also identified object segmentation as the key factor in user satisfaction. Our work demonstrates the advantages of a low-level, direct language-action mapping approach that can be applied to problem domains beyond image editing, such as audio editing or industrial design.

2016

A Wizard-of-Oz Study on A Non-Task-Oriented Dialog Systems That Reacts to User Engagement
Zhou Yu | Leah Nicolich-Henkin | Alan W Black | Alexander Rudnicky
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Strategy and Policy Learning for Non-Task-Oriented Conversational Systems
Zhou Yu | Ziyu Xu | Alan W Black | Alexander Rudnicky
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue

AppDialogue: Multi-App Dialogues for Intelligent Assistants
Ming Sun | Yun-Nung Chen | Zhenhao Hua | Yulian Tamres-Rudnicky | Arnab Dash | Alexander Rudnicky
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Users interact with individual apps on smart devices (e.g., phone, TV, car) to fulfill specific goals (e.g., find a photographer), but they may also pursue more complex tasks that span multiple domains and apps (e.g., plan a wedding ceremony). Planning and executing such multi-app tasks is typically managed by users, given the global context awareness required. To investigate how users arrange domains and apps to fulfill complex tasks in their daily lives, we conducted a user study with 14 participants to collect such data from their Android smartphones. This document 1) summarizes the techniques used in the data collection and 2) provides a brief statistical description of the data. The data guides future directions for researchers in the fields of conversational agents and personal assistants, and is available at http://AppDialogue.com.

2015

Jointly Modeling Inter-Slot Relations by Random Walk on Knowledge Graphs for Unsupervised Spoken Language Understanding
Yun-Nung Chen | William Yang Wang | Alexander Rudnicky
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Miscommunication Recovery in Physically Situated Dialogue
Matthew Marge | Alexander Rudnicky
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Matrix Factorization with Knowledge Graph Propagation for Unsupervised Spoken Language Understanding
Yun-Nung Chen | William Yang Wang | Anatole Gershman | Alexander Rudnicky
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

Conversational Strategies for Robustly Managing Dialog in Public Spaces
Aasish Pappu | Ming Sun | Seshadri Sridharan | Alexander Rudnicky
Proceedings of the EACL 2014 Workshop on Dialogue in Motion

Knowledge Acquisition Strategies for Goal-Oriented Dialog Systems
Aasish Pappu | Alexander Rudnicky
Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)

Two-Stage Stochastic Email Synthesizer
Yun-Nung Chen | Alexander Rudnicky
Proceedings of the 8th International Natural Language Generation Conference (INLG)

Two-Stage Stochastic Natural Language Generation for Email Synthesis by Modeling Sender Style and Topic Structure
Yun-Nung Chen | Alexander Rudnicky
Proceedings of the 8th International Natural Language Generation Conference (INLG)

2013

Predicting Tasks in Goal-Oriented Spoken Dialog Systems using Semantic Knowledge Bases
Aasish Pappu | Alexander Rudnicky
Proceedings of the SIGDIAL 2013 Conference

2012

The Structure and Generality of Spoken Route Instructions
Aasish Pappu | Alexander Rudnicky
Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2010

Using the Amazon Mechanical Turk to Transcribe and Annotate Meeting Speech for Extractive Summarization
Matthew Marge | Satanjeev Banerjee | Alexander Rudnicky
Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk

Towards Improving the Naturalness of Social Conversations with Dialogue Systems
Matthew Marge | João Miranda | Alan Black | Alexander Rudnicky
Proceedings of the SIGDIAL 2010 Conference

Comparing Spoken Language Route Instructions for Robots across Environment Representations
Matthew Marge | Alexander Rudnicky
Proceedings of the SIGDIAL 2010 Conference

2009

Non-textual Event Summarization by Applying Machine Learning to Template-based Language Generation
Mohit Kumar | Dipanjan Das | Sachin Agarwal | Alexander Rudnicky
Proceedings of the 2009 Workshop on Language Generation and Summarisation (UCNLG+Sum 2009)

Detecting the Noteworthiness of Utterances in Human Meetings
Satanjeev Banerjee | Alexander Rudnicky
Proceedings of the SIGDIAL 2009 Conference

Predicting Barge-in Utterance Errors by using Implicitly-Supervised ASR Accuracy and Barge-in Rate per User
Kazunori Komatani | Alexander I. Rudnicky
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers

2008

Mixture Pruning and Roughening for Scalable Acoustic Models
David Huggins-Daines | Alexander I. Rudnicky
Proceedings of the ACL-08: HLT Workshop on Mobile Language Processing

Interactive ASR Error Correction for Touchscreen Devices
David Huggins-Daines | Alexander I. Rudnicky
Proceedings of the ACL-08: HLT Demo Session

Automatic Extraction of Briefing Templates
Dipanjan Das | Mohit Kumar | Alexander I. Rudnicky
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I

Acquiring Domain-Specific Dialog Information from Task-Oriented Human-Human Interaction through an Unsupervised Learning
Ananlada Chotimongkol | Alexander Rudnicky
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2007

Implicitly Supervised Language Model Adaptation for Meeting Transcription
David Huggins-Daines | Alexander I. Rudnicky
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

Implicitly-supervised Learning in Spoken Language Interfaces: an Application to the Confidence Annotation Problem
Dan Bohus | Alexander Rudnicky
Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue

Olympus: an open-source framework for conversational spoken language interface research
Dan Bohus | Antoine Raux | Thomas Harris | Maxine Eskenazi | Alexander Rudnicky
Proceedings of the Workshop on Bridging the Gap: Academic and Industrial Research in Dialog Technologies

2006

Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Demonstrations
Alex Rudnicky | John Dowding | Natasa Milic-Frayling
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Demonstrations

SmartNotes: Implicit Labeling of Meeting Data through User Note-Taking and Browsing
Satanjeev Banerjee | Alexander I. Rudnicky
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Demonstrations

You Are What You Say: Using Meeting Participants’ Speech to Detect their Roles and Expertise
Satanjeev Banerjee | Alexander Rudnicky
Proceedings of the Analyzing Conversations in Text and Speech

2005

Error Handling in the RavenClaw Dialog Management Architecture
Dan Bohus | Alexander Rudnicky
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

Sorry, I Didn’t Catch That! - An Investigation of Non-understanding Errors and Recovery Strategies
Dan Bohus | Alexander I. Rudnicky
Proceedings of the 6th SIGdial Workshop on Discourse and Dialogue

2002

Speech Translation on a Tight Budget without Enough Data
Robert E. Frederking | Alan W. Black | Ralf D. Brown | Alexander Rudnicky | John Moody | Eric Steinbrecher
Proceedings of the ACL-02 Workshop on Speech-to-Speech Translation: Algorithms and Systems

2000

Stochastic Language Generation for Spoken Dialogue Systems
Alice H. Oh | Alexander I. Rudnicky
ANLP-NAACL 2000 Workshop: Conversational Systems

Task-based dialog management using an agenda
Wei Xu | Alexander I. Rudnicky
ANLP-NAACL 2000 Workshop: Conversational Systems

1999

A new approach to the translating telephone
Robert Frederking | Christopher Hogan | Alexander Rudnicky
Proceedings of Machine Translation Summit VII

The Translating Telephone has been a major goal of speech translation for many years. Previous approaches have attempted to work from limited-domain, fully-automatic translation towards broad-coverage, fully-automatic translation. We are approaching the problem from a different direction: starting with a broad-coverage but not fully-automatic system, and working towards full automation. We believe that working in this direction will provide us with better feedback, by observing users and collecting language data under realistic conditions, and thus may allow more rapid progress towards the same ultimate goal. Our initial approach relies on the widespread availability of Internet connections and web browsers to provide a user interface. We describe our initial work, which is an extension of the Diplomat wearable speech translator.

1997

Interactive Speech Translation in the DIPLOMAT Project
Robert Frederking | Alexander Rudnicky | Christopher Hogan
Spoken Language Translation

1994

Expanding the Scope of the ATIS Task: The ATIS-3 Corpus
Deborah A. Dahl | Madeleine Bates | Michael Brown | William Fisher | Kate Hunicke-Smith | David Pallett | Christine Pao | Alexander Rudnicky | Elizabeth Shriberg
Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994

1993

Session 1: Spoken Language Systems
Alexander I. Rudnicky
Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993

Multi-Site Data Collection and Evaluation in Spoken Language Understanding
L. Hirschman | M. Bates | D. Dahl | W. Fisher | J. Garofolo | D. Pallett | K. Hunicke-Smith | P. Price | A. Rudnicky | E. Tzoukermann
Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993

Mode preference in a simple data-retrieval task
Alexander I. Rudnicky
Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993

1990

A Comparison of Speech and Typed Input
Alexander G. Hauptmann | Alexander I. Rudnicky
Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990

The design of a spoken language interface
Jean-Michel Lunati | Alexander I. Rudnicky
Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990

1989

The design of voice-driven interfaces
Alexander I. Rudnicky
Speech and Natural Language: Proceedings of a Workshop Held at Philadelphia, Pennsylvania, February 21-23, 1989

Evaluating spoken language interaction
Alexander I. Rudnicky | Michelle Sakamoto | Joseph H. Polifroni
Speech and Natural Language: Proceedings of a Workshop Held at Cape Cod, Massachusetts, October 15-18, 1989