Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts)

Luis Chiruzzo, Hung-yi Lee, Leonardo Ribeiro (Editors)


Anthology ID: 2024.acl-tutorials
Month: August
Year: 2024
Address: Bangkok, Thailand
Venue: ACL
Publisher: Association for Computational Linguistics
URL: https://aclanthology.org/2024.acl-tutorials
PDF: https://aclanthology.org/2024.acl-tutorials.pdf

Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts)
Luis Chiruzzo | Hung-yi Lee | Leonardo Ribeiro

Computational Linguistics for Brain Encoding and Decoding: Principles, Practices and Beyond
Jingyuan Sun | Shaonan Wang | Zijiao Chen | Jixing Li | Marie-Francine Moens

Computational linguistics (CL) has witnessed tremendous advancements in recent years, with models such as large language models demonstrating exceptional performance in various natural language processing tasks. These advancements highlight their potential to help understand brain language processing, especially through the lens of brain encoding and decoding. Brain encoding is the mapping of linguistic stimuli to brain activity, while brain decoding is the reconstruction of linguistic stimuli from observed brain activity. CL models that excel at capturing and manipulating linguistic features are crucial for mapping linguistic stimuli to brain activity and vice versa. Brain encoding and decoding have vast applications, from enhancing human-computer interaction to developing assistive technologies for individuals with communication impairments. This tutorial will elucidate how computational linguistics can facilitate brain encoding and decoding. We will delve into the principles and practices of using CL methods for brain encoding and decoding, and discuss the challenges and future directions of the field. Through this tutorial, we aim to provide a comprehensive and informative overview of the intersection between computational linguistics and cognitive neuroscience, inspiring future research in this exciting and rapidly evolving field.
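To make the encoding setup concrete, here is a minimal sketch of a linear encoding model, the workhorse of this literature: ridge regression from language-model features to per-voxel brain responses. All shapes, variable names, and the random placeholder data are illustrative assumptions, not details taken from the tutorial.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, feat_dim, n_voxels = 200, 768, 1000

# X: one LM embedding per linguistic stimulus (e.g., per sentence);
# Y: the measured brain response (e.g., fMRI voxels) for that stimulus.
# Random placeholders here, so the resulting scores will hover near zero.
X = rng.standard_normal((n_stimuli, feat_dim))
Y = rng.standard_normal((n_stimuli, n_voxels))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

encoder = Ridge(alpha=1.0)   # linear map: stimulus features -> voxel responses
encoder.fit(X_tr, Y_tr)
Y_hat = encoder.predict(X_te)

# Per-voxel Pearson correlation between predicted and observed responses
# is the usual encoding score.
scores = [np.corrcoef(Y_te[:, v], Y_hat[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean voxel correlation: {np.mean(scores):.3f}")
```

Decoding runs the same machinery in the opposite direction, predicting stimulus features from observed brain activity.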

Automatic and Human-AI Interactive Text Generation (with a focus on Text Simplification and Revision)
Yao Dou | Philippe Laban | Claire Gardent | Wei Xu

In this tutorial, we focus on text-to-text generation, a class of natural language generation (NLG) tasks that take a piece of text as input and generate a revision that is improved according to some specific criteria (e.g., readability or linguistic style) while largely retaining the original meaning and length of the text. This includes many useful applications, such as text simplification, paraphrase generation, and style transfer. In contrast to text summarization and open-ended text completion (e.g., story generation), the text-to-text generation tasks we discuss in this tutorial are more constrained in terms of semantic consistency and targeted language styles. This level of control makes these tasks ideal testbeds for studying the ability of models to generate text that is both semantically adequate and stylistically appropriate. Moreover, these tasks are interesting from a technical standpoint, as they require complex combinations of lexical and syntactic transformations, stylistic control, and adherence to factual knowledge, all at once. With a special focus on text simplification and revision, this tutorial aims to provide an overview of state-of-the-art natural language generation research from four major aspects (Data, Models, Human-AI Collaboration, and Evaluation) and to discuss and showcase several significant recent advances: (1) the use of non-autoregressive approaches; (2) the shift from fine-tuning to prompting with large language models; (3) the development of new learnable metrics and fine-grained human evaluation frameworks; (4) a growing body of studies and datasets on non-English languages; (5) the rise of HCI+NLP+Accessibility interdisciplinary research to create real-world writing assistant systems.
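As a concrete illustration of advance (2), the shift from fine-tuning to prompting, here is a minimal sketch of prompt-based sentence simplification with an instruction-tuned model. The checkpoint and prompt wording are illustrative assumptions; the tutorial surveys many such setups rather than prescribing this one.

```python
from transformers import pipeline

# Instruction-tuned text-to-text model; the checkpoint is an assumption.
simplifier = pipeline("text2text-generation", model="google/flan-t5-base")

source = ("The committee's deliberations culminated in a unanimous "
          "endorsement of the proposal.")
prompt = f"Simplify this sentence for a general audience: {source}"

result = simplifier(prompt, max_new_tokens=40)[0]["generated_text"]
print(result)  # e.g., something like "The committee agreed to support the proposal."
```

Note how the task constraints show up in the prompt itself: the instruction asks for a readability change while the input text pins down the meaning to be preserved.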

Computational Expressivity of Neural Language Models
Alexandra Butoi | Ryan Cotterell | Anej Svete

Language models (LMs) are currently at the forefront of NLP research due to their remarkable versatility across diverse tasks. However, a large gap exists between their observed capabilities and the explanations proposed by established formal machinery. To motivate a better theoretical characterization of LMs’ abilities and limitations, this tutorial aims to provide a comprehensive introduction to a specific framework for the formal analysis of modern LMs using tools from formal language theory (FLT). We present how tools from FLT can be useful in understanding the inner workings and predicting the capabilities of modern neural LM architectures. We will cover recent results that use FLT to make precise and practically relevant statements about LMs based on recurrent neural networks and transformers by relating them to formal devices such as finite-state automata, Turing machines, and analog circuits. Altogether, the results covered in this tutorial will allow us to make precise statements and explanations about the observed and predicted behaviors of LMs, and to provide theoretically motivated suggestions for aspects of the architectures that could be improved.
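A toy instance of the kind of correspondence studied here, building on the classical observation (going back to Minsky-style constructions) that simple recurrent networks can exactly simulate finite-state automata: a parity DFA implemented as a one-hot "RNN" whose hidden-state update is a fixed transition matrix per input symbol. The code is purely illustrative.

```python
import numpy as np

# DFA over {0,1}: states {even, odd}; reading '1' flips the state,
# reading '0' keeps it. Each symbol gets a permutation matrix.
T = {
    "0": np.array([[1, 0], [0, 1]]),  # identity: parity unchanged
    "1": np.array([[0, 1], [1, 0]]),  # swap: parity flips
}

def parity_rnn(string: str) -> str:
    h = np.array([1, 0])              # one-hot hidden state, start in "even"
    for symbol in string:
        h = T[symbol] @ h             # linear recurrent update
    return "even" if h[0] == 1 else "odd"

assert parity_rnn("10110") == "odd"   # three 1s
assert parity_rnn("1001") == "even"   # two 1s
print(parity_rnn("111"))              # odd
```

The tutorial's results generalize this idea: characterizing which classes of formal devices a given neural architecture can and cannot emulate.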

Presentation Matters: How to Communicate Science in the NLP Venues and in the Wild?
Sarvnaz Karimi | Cecile Paris | Gholamreza Haffari

Each year a large number of early-career researchers join the NLP/Computational Linguistics community, most of them presenting their first research at *ACL conferences and workshops. Writing a paper that is accepted at these venues is one important step, but communicating the outcome is equally important and sets the path to the impact of the research. In addition, not all PhD candidates have the chance to be trained in presentation skills: research methods courses vary in quality, may not cover scientific communication, and are rarely tailored to the NLP community. We propose an introductory tutorial that covers a range of communication skills, including writing, oral presentation (posters and demos), and social media presence. It aims to fill the gap for researchers who do not have access to research methods courses or mentors who could help them acquire such skills. The interactive nature of the tutorial will allow attendees to ask questions and seek clarifications, which would not be possible from reading materials alone.

Vulnerabilities of Large Language Models to Adversarial Attacks
Yu Fu | Erfan Shayegan | Md. Mamun Al Abdullah | Pedram Zaree | Nael Abu-Ghazaleh | Yue Dong

This tutorial serves as a comprehensive guide on the vulnerabilities of Large Language Models (LLMs) to adversarial attacks, an interdisciplinary field that blends perspectives from Natural Language Processing (NLP) and Cybersecurity. As LLMs become more complex and integrated into various systems, understanding their security attributes is crucial. However, current research indicates that even safety-aligned models are not impervious to adversarial attacks that can result in incorrect or harmful outputs. The tutorial first lays the foundation by explaining safety-aligned LLMs and concepts in cybersecurity. It then categorizes existing research based on different types of learning architectures and attack methods. We highlight the existing vulnerabilities of unimodal LLMs, multi-modal LLMs, and systems that integrate LLMs, focusing on adversarial attacks designed to exploit weaknesses and mislead AI systems. Finally, the tutorial delves into the potential causes of these vulnerabilities and discusses potential defense mechanisms.
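To ground the attack setting, here is a toy sketch of the basic adversarial pattern: small input perturbations probed against a model to see whether its output can be flipped. It uses a small sentiment classifier as the victim for simplicity; the LLM-specific attacks the tutorial surveys (e.g., jailbreak prompts against safety-aligned models) follow the same probe-and-perturb logic but are considerably more involved. The model choice and perturbations are illustrative assumptions.

```python
from transformers import pipeline

# Small sentiment classifier as the victim model (an assumption for this demo).
clf = pipeline("sentiment-analysis",
               model="distilbert-base-uncased-finetuned-sst-2-english")

original = "The movie was wonderful and I loved every minute."
# Naive character-level perturbations of the sentiment-bearing words.
# Real attacks *search* over such edits until the prediction flips;
# this loop only shows the probing step.
candidates = [
    original,
    "The movie was w0nderful and I l0ved every minute.",
    "The m0vie was wonderfu1 and I 1oved every m1nute.",
]

for text in candidates:
    out = clf(text)[0]
    print(f"{out['label']:>8} ({out['score']:.2f}): {text}")
```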

Detecting Machine-Generated Text: Techniques and Challenges
Li Gao | Wenhan Xiong | Taewoo Kim

As AI-generated text increasingly resembles human-written content, the ability to detect machine-generated text becomes crucial in many applications. This tutorial aims to provide a comprehensive overview of text detection techniques, focusing on machine-generated text and deepfakes. We will discuss various methods for distinguishing between human-written and machine-generated text, including statistical methods, neural network-based techniques, and hybrid approaches. The tutorial will also cover the challenges in the detection process, such as dealing with evolving models and maintaining robustness against adversarial attacks. By the end of the session, attendees will have a solid understanding of current techniques and future directions in the field of text detection.
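As one concrete example of the statistical methods mentioned above, here is a minimal sketch of a likelihood-based detector: score a text by its average per-token negative log-likelihood under a reference LM, on the premise that machine-generated text tends to be more probable under such models than human-written text. The reference model and the threshold are illustrative assumptions.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_nll(text: str) -> float:
    """Average per-token negative log-likelihood under the reference LM."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return lm(ids, labels=ids).loss.item()  # mean cross-entropy per token

text = "The quick brown fox jumps over the lazy dog."
score = avg_nll(text)
print(f"avg NLL: {score:.2f}")
# Illustrative decision rule: in practice the threshold is tuned on
# held-out human-written vs. machine-generated samples.
print("machine-like" if score < 3.0 else "human-like")
```

This single-score detector is exactly the kind of baseline that evolving generators and adversarial paraphrasing degrade, which motivates the neural and hybrid approaches the tutorial covers.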