Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts


Anthology ID:
2024.emnlp-tutorials
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Venue:
EMNLP
SIG:
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/2024.emnlp-tutorials
DOI:
PDF:
https://aclanthology.org/2024.emnlp-tutorials.pdf


Enhancing LLM Capabilities Beyond Scaling Up
Wenpeng Yin | Muhao Chen | Rui Zhang | Ben Zhou | Fei Wang | Dan Roth

General-purpose large language models (LLMs) are progressively expanding both in scale and in access to nonpublic training data, which has led to notable progress on a variety of AI problems. Nevertheless, two questions remain: i) Is scaling up the sole avenue for extending the capabilities of LLMs? ii) Instead of developing general-purpose LLMs, how can we endow LLMs with specific knowledge? This tutorial targets researchers and practitioners interested in extending the capabilities of LLMs beyond scaling up. To this end, we will discuss several lines of research in that direction, including (i) the adaptation of LLMs to assimilate new information in situations where conflicts arise, (ii) the adaptation of LLMs to address target problems with inherent constraints, (iii) the customization of LLMs to align with user-specific instructions and preferences, (iv) the defense against potential attacks and threats by malicious users, and (v) the collaboration with external models, directly or through APIs. Finally, we will conclude the tutorial by outlining directions for further investigation.

Countering Hateful and Offensive Speech Online - Open Challenges
Leon Derczynski | Marco Guerini | Debora Nozza | Flor Miriam Plaza-del-Arco | Jeffrey Sorensen | Marcos Zampieri

In today’s digital age, hate speech and offensive speech online pose a significant challenge to maintaining respectful and inclusive online environments. This tutorial aims to give attendees a comprehensive understanding of the field by delving into essential dimensions such as multilingualism, counter-narrative generation, fairness and ethics in AI, and recent advanced approaches, alongside a hands-on session with one of the most popular APIs for detecting hate speech. In addition, the tutorial aims to foster collaboration and inspire participants to create safer online spaces by detecting and mitigating hate speech.

Language Agents: Foundations, Prospects, and Risks
Yu Su | Diyi Yang | Shunyu Yao | Tao Yu

Language agents are autonomous agents, usually powered by large language models, that can follow language instructions to carry out diverse and complex tasks in real-world or simulated environments. They are among the most heated discussion threads in AI and NLP at present, with many proof-of-concept efforts, yet there is no systematic account of the conceptual definition, theoretical foundations, promising directions, and risks of language agents. This tutorial aspires to fill this gap by providing a conceptual framework of language agents as well as a comprehensive discussion of important topic areas, including tool augmentation, grounding, reasoning and planning, multi-agent systems, and risks and societal impact. Language played a critical role in the evolution of biological intelligence, and artificial intelligence may now be following a similar evolutionary path. This is remarkable and concerning at the same time. We hope this tutorial will provide a timely framework to facilitate constructive discussion on this important emerging topic.

Introductory Tutorial: Reasoning with Natural Language Explanations
Marco Valentino | André Freitas

TODO

AI for Science in the Era of Large Language Models
Zhenyu Bi | Minghao Xu | Jian Tang | Xuan Wang

The capabilities of AI in the realm of science span a wide spectrum, from the atomic level, where it solves partial differential equations for quantum systems, to the molecular level, predicting chemical or protein structures, and even extending to societal predictions such as infectious disease outbreaks. Recent advancements in large language models (LLMs), exemplified by models like ChatGPT, have showcased significant prowess in tasks involving natural language, such as translating languages, constructing chatbots, and answering questions. When we consider scientific data, we notice a resemblance to natural language in terms of sequences – scientific literature and health records presented as text, bio-omics data arranged in sequences, or sensor data like brain signals. The question arises: can we harness the potential of these recent LLMs to drive scientific progress? In this tutorial, we will explore the application of large language models to three crucial categories of scientific data: 1) textual data, 2) biomedical sequences, and 3) brain signals. Furthermore, we will delve into the challenges LLMs face in scientific research, including ensuring trustworthiness, achieving personalization, and adapting to multi-modal data representation.

Human-Centered Evaluation of Language Technologies
Su Lin Blodgett | Jackie Chi Kit Cheung | Vera Liao | Ziang Xiao

Evaluation is a cornerstone topic in NLP. However, many criticisms have been raised about the community’s evaluation practices, including a lack of human-centered considerations about people’s needs for language technologies and their actual impact on people. This “evaluation crisis” is exacerbated by the recent development of large generative models with diverse and uncertain capabilities. This tutorial aims to inspire more human-centered evaluation in NLP by introducing perspectives and methodologies from human-computer interaction (HCI), a field concerned primarily with the design and evaluation of technologies. The tutorial will start with an overview of current NLP evaluation practices and their limitations, then introduce the “toolbox of evaluation methods” from HCI with varying considerations such as what to evaluate for, how generalizable the results are to the real-world contexts, and pragmatic costs to conduct the evaluation. The tutorial will also encourage reflection on how these HCI perspectives and methodologies can complement NLP evaluation through Q&A discussions and a hands-on exercise.